Dec 01 00:06:18 crc systemd[1]: Starting Kubernetes Kubelet... Dec 01 00:06:18 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 01 00:06:18 
crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 
00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc 
restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 
crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 01 00:06:18 crc restorecon[4687]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 
00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 01 00:06:18 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:18 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 
00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 01 00:06:19 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 01 00:06:19 crc kubenswrapper[4846]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.363544 4846 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367946 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367972 4846 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367979 4846 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367985 4846 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367991 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.367996 4846 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368003 4846 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368009 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368014 4846 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368021 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368029 4846 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368036 4846 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368043 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368050 4846 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368057 4846 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368065 4846 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368072 4846 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368078 4846 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368083 4846 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368088 4846 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368093 4846 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368099 4846 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368104 4846 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368109 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368114 4846 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368119 4846 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368124 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368129 4846 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368134 4846 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368140 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368145 4846 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368151 4846 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368156 4846 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368161 4846 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368167 4846 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368172 4846 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368179 4846 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368187 4846 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368193 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368199 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368205 4846 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368212 4846 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368217 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368223 4846 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368228 4846 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368236 4846 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368243 4846 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368250 4846 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368258 4846 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368266 4846 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368271 4846 feature_gate.go:330] unrecognized feature gate: Example Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368276 4846 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368281 4846 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368286 4846 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368292 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368297 4846 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368301 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368306 4846 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368311 4846 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368318 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368322 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368330 4846 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
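The "unrecognized feature gate" warnings above come from OpenShift-specific gates that the embedded Kubernetes feature-gate registry does not know about, and the same block is emitted again several times further down. A minimal sketch for deduplicating them, assuming the journal text has been saved to a local file (the name kubelet.log is hypothetical):

```python
import re
from collections import Counter

# Matches the message emitted at feature_gate.go:330 in the entries above.
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def count_unrecognized_gates(journal_text: str) -> Counter:
    """Count how many times each unrecognized feature gate is warned about."""
    return Counter(GATE_RE.findall(journal_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:          # hypothetical dump of this journal
        counts = count_unrecognized_gates(f.read())
    for gate, n in counts.most_common():
        print(f"{gate}: warned {n} time(s)")
```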
Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368337 4846 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368343 4846 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368348 4846 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368354 4846 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368359 4846 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368364 4846 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368369 4846 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368375 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.368380 4846 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368928 4846 flags.go:64] FLAG: --address="0.0.0.0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368945 4846 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368959 4846 flags.go:64] FLAG: --anonymous-auth="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368978 4846 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368987 4846 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.368993 4846 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369004 4846 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369012 4846 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369020 4846 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369026 4846 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369033 4846 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369039 4846 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369045 4846 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369053 4846 flags.go:64] FLAG: --cgroup-root="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369060 4846 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369068 4846 flags.go:64] FLAG: --client-ca-file="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369075 4846 flags.go:64] FLAG: --cloud-config="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369081 4846 flags.go:64] FLAG: --cloud-provider="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369087 4846 flags.go:64] FLAG: --cluster-dns="[]" Dec 01 
00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369096 4846 flags.go:64] FLAG: --cluster-domain="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369103 4846 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369110 4846 flags.go:64] FLAG: --config-dir="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369116 4846 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369125 4846 flags.go:64] FLAG: --container-log-max-files="5" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369135 4846 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369142 4846 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369148 4846 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369157 4846 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369163 4846 flags.go:64] FLAG: --contention-profiling="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369171 4846 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369178 4846 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369185 4846 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369191 4846 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369199 4846 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369206 4846 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369212 4846 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369219 4846 flags.go:64] FLAG: --enable-load-reader="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369225 4846 flags.go:64] FLAG: --enable-server="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369231 4846 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369239 4846 flags.go:64] FLAG: --event-burst="100" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369248 4846 flags.go:64] FLAG: --event-qps="50" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369255 4846 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369261 4846 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369269 4846 flags.go:64] FLAG: --eviction-hard="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369277 4846 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369283 4846 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369290 4846 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369296 4846 flags.go:64] FLAG: --eviction-soft="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369303 4846 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369310 4846 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369318 4846 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369325 4846 flags.go:64] FLAG: --experimental-mounter-path="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369331 4846 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369338 4846 flags.go:64] FLAG: --fail-swap-on="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369345 4846 flags.go:64] FLAG: --feature-gates="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369356 4846 flags.go:64] FLAG: --file-check-frequency="20s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369363 4846 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369370 4846 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369377 4846 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369384 4846 flags.go:64] FLAG: --healthz-port="10248" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369391 4846 flags.go:64] FLAG: --help="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369397 4846 flags.go:64] FLAG: --hostname-override="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369404 4846 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369410 4846 flags.go:64] FLAG: --http-check-frequency="20s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369417 4846 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369423 4846 flags.go:64] FLAG: --image-credential-provider-config="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369430 4846 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369436 4846 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369442 4846 flags.go:64] FLAG: --image-service-endpoint="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369448 4846 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369455 4846 flags.go:64] FLAG: --kube-api-burst="100" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369461 4846 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369468 4846 flags.go:64] FLAG: --kube-api-qps="50" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369475 4846 flags.go:64] FLAG: --kube-reserved="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369481 4846 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369487 4846 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369495 4846 flags.go:64] FLAG: --kubelet-cgroups="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369502 4846 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369507 4846 flags.go:64] FLAG: --lock-file="" 
Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369514 4846 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369520 4846 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369527 4846 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369537 4846 flags.go:64] FLAG: --log-json-split-stream="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369543 4846 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369549 4846 flags.go:64] FLAG: --log-text-split-stream="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369556 4846 flags.go:64] FLAG: --logging-format="text" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369562 4846 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369569 4846 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369575 4846 flags.go:64] FLAG: --manifest-url="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369581 4846 flags.go:64] FLAG: --manifest-url-header="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369589 4846 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369595 4846 flags.go:64] FLAG: --max-open-files="1000000" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369603 4846 flags.go:64] FLAG: --max-pods="110" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369609 4846 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369615 4846 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369621 4846 flags.go:64] FLAG: --memory-manager-policy="None" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369628 4846 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369634 4846 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369640 4846 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369647 4846 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369669 4846 flags.go:64] FLAG: --node-status-max-images="50" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369676 4846 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369702 4846 flags.go:64] FLAG: --oom-score-adj="-999" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369709 4846 flags.go:64] FLAG: --pod-cidr="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369716 4846 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369727 4846 flags.go:64] FLAG: --pod-manifest-path="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369733 4846 flags.go:64] FLAG: --pod-max-pids="-1" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369740 4846 
flags.go:64] FLAG: --pods-per-core="0" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369748 4846 flags.go:64] FLAG: --port="10250" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369754 4846 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369760 4846 flags.go:64] FLAG: --provider-id="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369767 4846 flags.go:64] FLAG: --qos-reserved="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369773 4846 flags.go:64] FLAG: --read-only-port="10255" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369779 4846 flags.go:64] FLAG: --register-node="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369785 4846 flags.go:64] FLAG: --register-schedulable="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369791 4846 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369803 4846 flags.go:64] FLAG: --registry-burst="10" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369809 4846 flags.go:64] FLAG: --registry-qps="5" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369815 4846 flags.go:64] FLAG: --reserved-cpus="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369821 4846 flags.go:64] FLAG: --reserved-memory="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369830 4846 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369837 4846 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369843 4846 flags.go:64] FLAG: --rotate-certificates="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369849 4846 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369856 4846 flags.go:64] FLAG: --runonce="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369861 4846 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369867 4846 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369875 4846 flags.go:64] FLAG: --seccomp-default="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369880 4846 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369887 4846 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369893 4846 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369900 4846 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369906 4846 flags.go:64] FLAG: --storage-driver-password="root" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369912 4846 flags.go:64] FLAG: --storage-driver-secure="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369919 4846 flags.go:64] FLAG: --storage-driver-table="stats" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369924 4846 flags.go:64] FLAG: --storage-driver-user="root" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369930 4846 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369938 4846 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 
01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369944 4846 flags.go:64] FLAG: --system-cgroups="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369950 4846 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369960 4846 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369967 4846 flags.go:64] FLAG: --tls-cert-file="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369973 4846 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369981 4846 flags.go:64] FLAG: --tls-min-version="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369987 4846 flags.go:64] FLAG: --tls-private-key-file="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369993 4846 flags.go:64] FLAG: --topology-manager-policy="none" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.369999 4846 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370005 4846 flags.go:64] FLAG: --topology-manager-scope="container" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370011 4846 flags.go:64] FLAG: --v="2" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370020 4846 flags.go:64] FLAG: --version="false" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370028 4846 flags.go:64] FLAG: --vmodule="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370036 4846 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370042 4846 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370238 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370247 4846 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370253 4846 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370260 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370266 4846 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370272 4846 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370278 4846 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370283 4846 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370289 4846 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370294 4846 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370300 4846 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370305 4846 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370311 4846 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 
00:06:19.370316 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370321 4846 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370327 4846 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370333 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370338 4846 feature_gate.go:330] unrecognized feature gate: Example Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370345 4846 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370352 4846 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370360 4846 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370365 4846 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370371 4846 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370376 4846 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370382 4846 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370387 4846 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370392 4846 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370397 4846 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370402 4846 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370407 4846 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370412 4846 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370418 4846 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370423 4846 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370429 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370434 4846 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370439 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370444 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370450 4846 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
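Every command-line value the kubelet started with is echoed in the flags.go:64 "FLAG: --name=\"value\"" entries above, so the effective flag set can be recovered mechanically from the journal. A sketch of that, again assuming the journal text is available in a local file (hypothetical name):

```python
import re

# Matches entries like: flags.go:64] FLAG: --max-pods="110"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flag_dump(journal_text: str) -> dict[str, str]:
    """Return the effective kubelet flag values echoed in the startup log."""
    return dict(FLAG_RE.findall(journal_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:            # hypothetical dump of this journal
        flags = parse_flag_dump(f.read())
    print(flags.get("--node-ip"))             # 192.168.126.11 in this log
    print(flags.get("--register-with-taints"))
```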
Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370457 4846 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370463 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370470 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370476 4846 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370482 4846 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370488 4846 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370494 4846 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370499 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370504 4846 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370510 4846 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370534 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370556 4846 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370564 4846 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370571 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370579 4846 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370586 4846 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370592 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370599 4846 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370608 4846 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370615 4846 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370621 4846 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370628 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370634 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370639 4846 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370645 4846 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370651 4846 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370656 4846 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370670 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370676 4846 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370681 4846 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370706 4846 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370712 4846 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.370717 4846 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.370735 4846 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.389030 4846 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.389100 4846 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389234 4846 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389249 4846 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389259 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389267 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389277 4846 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389285 4846 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 00:06:19 crc 
kubenswrapper[4846]: W1201 00:06:19.389293 4846 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389302 4846 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389310 4846 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389318 4846 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389326 4846 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389333 4846 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389341 4846 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389348 4846 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389357 4846 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389365 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389376 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389385 4846 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389393 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389403 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389411 4846 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389419 4846 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389427 4846 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389435 4846 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389443 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389451 4846 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389461 4846 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
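After each pass over the gate warnings, the kubelet logs a consolidated "feature gates: {map[...]}" entry at feature_gate.go:386 (one appears a few entries above and the same summary is repeated twice more below). A small sketch that turns that summary into a Python dict, using only the format visible in this log:

```python
import re

# Matches the summary logged at feature_gate.go:386, e.g.
#   feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}
SUMMARY_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_feature_gate_summary(line: str) -> dict[str, bool]:
    """Turn the kubelet's feature-gate summary entry into a name -> bool map."""
    m = SUMMARY_RE.search(line)
    if not m:
        raise ValueError("no feature-gate summary found in line")
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

if __name__ == "__main__":
    sample = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
    print(parse_feature_gate_summary(sample))
    # -> {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```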
Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389473 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389483 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389491 4846 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389500 4846 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389509 4846 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389518 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389526 4846 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389534 4846 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389542 4846 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389549 4846 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389560 4846 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389570 4846 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389579 4846 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389588 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389597 4846 feature_gate.go:330] unrecognized feature gate: Example Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389606 4846 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389614 4846 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389622 4846 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389630 4846 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389638 4846 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389645 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389653 4846 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389660 4846 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389669 4846 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389677 4846 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389725 4846 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389735 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389743 4846 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389751 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389760 4846 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389767 4846 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389778 4846 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389791 4846 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389802 4846 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389815 4846 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389825 4846 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389835 4846 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389845 4846 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389854 4846 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389862 4846 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389870 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389877 4846 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389885 4846 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.389893 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.389906 4846 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390148 4846 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390164 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390174 4846 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390182 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390191 4846 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390200 4846 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390208 4846 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390216 4846 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390224 4846 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390232 4846 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390240 4846 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390251 4846 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390262 4846 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390271 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390279 4846 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390288 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390295 4846 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390304 4846 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390311 4846 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390319 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390328 4846 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390335 4846 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390343 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390350 4846 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390358 4846 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390366 4846 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390373 4846 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390381 4846 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390388 4846 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390396 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390404 4846 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390411 4846 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390419 4846 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390426 4846 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390433 4846 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390441 4846 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390449 4846 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390457 4846 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390464 4846 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390472 4846 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390482 4846 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390492 4846 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390501 4846 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390510 4846 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390519 4846 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390529 4846 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390538 4846 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390583 4846 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390592 4846 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390601 4846 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390609 4846 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390618 4846 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390628 4846 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390637 4846 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390645 4846 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390653 4846 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390660 4846 feature_gate.go:330] unrecognized feature gate: Example Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390669 4846 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390676 4846 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390711 4846 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390719 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390727 4846 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390735 4846 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390743 4846 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390752 4846 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390760 4846 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390768 4846 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390776 4846 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390786 4846 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390796 4846 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.390807 4846 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.390823 4846 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.391513 4846 server.go:940] "Client rotation is on, will bootstrap in background" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.396605 4846 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.396808 4846 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
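The entry above gives the path of the client cert/key pair the kubelet loads, and the entries that follow log its expiration and the computed rotation deadline. A minimal sketch for checking that expiry directly on the node, assuming the third-party cryptography package is installed and the file is readable; the value it prints should line up with the "Certificate expiration is 2026-02-24 05:52:08 +0000 UTC" entry below.

```python
import re
from pathlib import Path

from cryptography import x509  # third-party: pip install cryptography

# Path taken from the "Loading cert/key pair" entry above; the file holds both
# the client certificate and its key, so pull out just the certificate block.
CERT_PATH = Path("/var/lib/kubelet/pki/kubelet-client-current.pem")
CERT_BLOCK = re.compile(
    rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", re.S
)

def client_cert_not_after():
    """Return the expiry of the kubelet client certificate the log just loaded."""
    pem = CERT_BLOCK.search(CERT_PATH.read_bytes())
    if pem is None:
        raise ValueError(f"no certificate block found in {CERT_PATH}")
    cert = x509.load_pem_x509_certificate(pem.group(0))
    return cert.not_valid_after  # naive UTC datetime; compare with the expiry logged below

if __name__ == "__main__":
    print("kubelet client certificate expires:", client_cert_not_after())
```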
Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.397828 4846 server.go:997] "Starting client certificate rotation" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.397872 4846 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.398111 4846 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 10:39:23.708286036 +0000 UTC Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.398303 4846 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 442h33m4.309990477s for next certificate rotation Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.405509 4846 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.408450 4846 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.417918 4846 log.go:25] "Validated CRI v1 runtime API" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.434539 4846 log.go:25] "Validated CRI v1 image API" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.436628 4846 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.441843 4846 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-01-00-02-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.441905 4846 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.459860 4846 manager.go:217] Machine: {Timestamp:2025-12-01 00:06:19.458118935 +0000 UTC m=+0.238888039 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2d73afc2-2e69-417d-b195-29982d0d72a9 BootID:6988692f-f9e5-459a-a6c8-c307d43c0948 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 
DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e5:ea:0b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e5:ea:0b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b4:07:d2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:9d:7d:b7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f8:af:bb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a8:0d:10 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9e:f9:38:ac:1b:2a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:29:08:17:37:d6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.460137 4846 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.460362 4846 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.463136 4846 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.463812 4846 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.463864 4846 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.464144 4846 topology_manager.go:138] "Creating topology manager with none policy" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.464163 4846 container_manager_linux.go:303] "Creating device plugin manager" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.464902 4846 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.464943 4846 server.go:66] "Creating device plugin registration server" version="v1beta1" 
socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.465291 4846 state_mem.go:36] "Initialized new in-memory state store" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.465438 4846 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.466258 4846 kubelet.go:418] "Attempting to sync node with API server" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.466285 4846 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.466321 4846 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.466339 4846 kubelet.go:324] "Adding apiserver pod source" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.466356 4846 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.468717 4846 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.469145 4846 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470110 4846 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.470279 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.470280 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.470544 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.470599 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470865 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470897 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470910 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470920 4846 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/host-path" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470935 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470945 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470954 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470969 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470979 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.470989 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.471002 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.471012 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.471384 4846 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.472004 4846 server.go:1280] "Started kubelet" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.472445 4846 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.472453 4846 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.472665 4846 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.473231 4846 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 01 00:06:19 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.475618 4846 server.go:460] "Adding debug handlers to kubelet server" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.476532 4846 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.476571 4846 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.476801 4846 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:00:28.69449715 +0000 UTC Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.476836 4846 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 994h54m9.217662535s for next certificate rotation Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.475860 4846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187ceeacafb35abf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 00:06:19.471944383 +0000 UTC m=+0.252713477,LastTimestamp:2025-12-01 00:06:19.471944383 +0000 UTC m=+0.252713477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.477143 4846 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.477153 4846 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.477256 4846 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.477252 4846 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.477718 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.478292 4846 factory.go:55] Registering systemd factory Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.478336 4846 factory.go:221] Registration of the systemd container factory successfully Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.479216 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.479401 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection 
refused" logger="UnhandledError" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.480783 4846 factory.go:153] Registering CRI-O factory Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.480833 4846 factory.go:221] Registration of the crio container factory successfully Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.480975 4846 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.481019 4846 factory.go:103] Registering Raw factory Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.481052 4846 manager.go:1196] Started watching for new ooms in manager Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.482169 4846 manager.go:319] Starting recovery of all containers Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499341 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499893 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499922 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499941 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499963 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.499985 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500003 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500022 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500044 4846 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500064 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500082 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500100 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500119 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500171 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500189 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500209 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500229 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500247 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500265 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500283 4846 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500300 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500320 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500338 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500356 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500377 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500396 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500419 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500441 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500467 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500485 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500504 4846 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500547 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500568 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500586 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500628 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500667 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500720 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500750 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500774 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500800 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500822 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500841 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500859 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500880 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500898 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500917 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500935 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500956 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.500985 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501011 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501038 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501062 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501094 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501117 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501138 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501159 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501179 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501205 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501226 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501244 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501262 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501280 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501299 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501316 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501337 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501355 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501373 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501393 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501422 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501447 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501473 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501498 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501524 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501547 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501571 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501596 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501618 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501643 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501671 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501732 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501762 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501788 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501811 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501835 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501858 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501884 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501912 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501940 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501966 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.501993 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502017 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502043 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502066 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502090 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502119 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502145 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502173 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502200 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502224 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502249 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502276 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502300 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502328 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502353 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502388 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502417 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502443 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502470 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502495 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502524 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502552 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502578 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502605 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502632 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502657 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502742 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502774 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502799 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502822 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502846 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502867 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502890 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502932 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502957 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.502982 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503007 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503029 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503051 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503075 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503098 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503121 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503143 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503173 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503196 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503221 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503244 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503268 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503293 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503318 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503341 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503375 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503400 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503424 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503448 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503486 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503513 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503537 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503559 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503582 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503605 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503630 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503655 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503679 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503782 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503806 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503829 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503852 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503875 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503899 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503922 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503948 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503972 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.503995 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504019 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504043 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504066 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504092 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504115 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504138 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504162 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504187 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504209 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504230 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504253 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504275 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504296 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504323 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504344 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.504368 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.507164 4846 manager.go:324] Recovery completed Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547711 4846 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547783 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547802 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547817 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547833 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" 
seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547846 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547862 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547875 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547889 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547901 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547915 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547926 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547939 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547952 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547965 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.547978 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 01 00:06:19 crc 
kubenswrapper[4846]: I1201 00:06:19.547991 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548003 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548016 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548030 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548043 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548055 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548067 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548079 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548090 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548103 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548114 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 01 00:06:19 crc 
kubenswrapper[4846]: I1201 00:06:19.548125 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548138 4846 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548151 4846 reconstruct.go:97] "Volume reconstruction finished" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.548160 4846 reconciler.go:26] "Reconciler: start to sync state" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.557541 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.559556 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.559640 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.559665 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.562027 4846 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.562046 4846 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.562067 4846 state_mem.go:36] "Initialized new in-memory state store" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.573764 4846 policy_none.go:49] "None policy: Start" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.574554 4846 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.574582 4846 state_mem.go:35] "Initializing new in-memory state store" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.576867 4846 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.577536 4846 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.579068 4846 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.579123 4846 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.579149 4846 kubelet.go:2335] "Starting kubelet main sync loop" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.579203 4846 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 01 00:06:19 crc kubenswrapper[4846]: W1201 00:06:19.580032 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.580097 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.625939 4846 manager.go:334] "Starting Device Plugin manager" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.627917 4846 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.627952 4846 server.go:79] "Starting device plugin registration server" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.628479 4846 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.628502 4846 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.628711 4846 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.628806 4846 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.628818 4846 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.635738 4846 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.679235 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.679284 4846 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.679352 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680271 4846 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680298 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680308 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680424 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680609 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.680636 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681119 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681131 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681299 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681357 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681410 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.681439 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682121 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682162 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682265 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682305 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682331 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682367 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.682396 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683052 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683082 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683129 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683150 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683362 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683437 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.683463 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.684190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.684208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.684218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685080 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685169 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685186 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685708 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685769 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.685783 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.729002 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.730467 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.730519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.730539 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.730579 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.731257 4846 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852024 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852096 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852139 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852168 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852194 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852220 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852313 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852403 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852432 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852481 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 
00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852518 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852564 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852624 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852650 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.852741 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.932058 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.936565 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.936599 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.936607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.936631 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:19 crc kubenswrapper[4846]: E1201 00:06:19.937279 4846 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954654 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954755 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954801 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954851 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954904 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954920 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954951 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954972 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955017 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955021 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.954918 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955053 4846 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955091 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955105 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955135 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955165 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955181 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955196 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955221 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955227 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955277 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc 
kubenswrapper[4846]: I1201 00:06:19.955302 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955314 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955332 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955350 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955337 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955370 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955372 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955393 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:19 crc kubenswrapper[4846]: I1201 00:06:19.955432 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.007968 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.029294 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.030749 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-1e18cc3b76117cf1d61d0d0a8b3a4d78b02d5bc0f9ebda85ab3575485e87897b WatchSource:0}: Error finding container 1e18cc3b76117cf1d61d0d0a8b3a4d78b02d5bc0f9ebda85ab3575485e87897b: Status 404 returned error can't find the container with id 1e18cc3b76117cf1d61d0d0a8b3a4d78b02d5bc0f9ebda85ab3575485e87897b Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.039388 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.050618 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-218d5d3584f9a4944f30456482eb4ea12eaa2a9dce9bc98193b7fe0211016907 WatchSource:0}: Error finding container 218d5d3584f9a4944f30456482eb4ea12eaa2a9dce9bc98193b7fe0211016907: Status 404 returned error can't find the container with id 218d5d3584f9a4944f30456482eb4ea12eaa2a9dce9bc98193b7fe0211016907 Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.064143 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-da521c8fce2dfbbd17168903d83f3123763dcdb726e10e183548f638a1c995ef WatchSource:0}: Error finding container da521c8fce2dfbbd17168903d83f3123763dcdb726e10e183548f638a1c995ef: Status 404 returned error can't find the container with id da521c8fce2dfbbd17168903d83f3123763dcdb726e10e183548f638a1c995ef Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.066661 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.071433 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.080343 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.082843 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f46c881a4d3ec5fd7c40ab421b192e2b96c5146ccc679fbaa877206550f900ca WatchSource:0}: Error finding container f46c881a4d3ec5fd7c40ab421b192e2b96c5146ccc679fbaa877206550f900ca: Status 404 returned error can't find the container with id f46c881a4d3ec5fd7c40ab421b192e2b96c5146ccc679fbaa877206550f900ca Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.085644 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bfe1a8f3e766b1310c2ccdf6f2103ecdf4f362a7f38b0f1e85989d2d7cb9a786 WatchSource:0}: Error finding container bfe1a8f3e766b1310c2ccdf6f2103ecdf4f362a7f38b0f1e85989d2d7cb9a786: Status 404 returned error can't find the container with id bfe1a8f3e766b1310c2ccdf6f2103ecdf4f362a7f38b0f1e85989d2d7cb9a786 Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.337416 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.339777 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.339853 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.339889 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.339928 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.340555 4846 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.354246 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.354380 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.431984 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.432087 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.474219 4846 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.584904 4846 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441" exitCode=0 Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.584988 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.585061 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bfe1a8f3e766b1310c2ccdf6f2103ecdf4f362a7f38b0f1e85989d2d7cb9a786"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.585144 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.585904 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.585931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.585940 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.587191 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.587242 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f46c881a4d3ec5fd7c40ab421b192e2b96c5146ccc679fbaa877206550f900ca"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.589638 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5" exitCode=0 Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.589719 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.589743 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da521c8fce2dfbbd17168903d83f3123763dcdb726e10e183548f638a1c995ef"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.589842 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.590941 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.590972 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.590984 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.592539 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.592915 4846 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="84cff277f2534205b1b1c0132c86d9500766ff160fd760c34d692bb11dd1480d" exitCode=0 Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.592981 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"84cff277f2534205b1b1c0132c86d9500766ff160fd760c34d692bb11dd1480d"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.593024 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"218d5d3584f9a4944f30456482eb4ea12eaa2a9dce9bc98193b7fe0211016907"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.593125 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.593538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.593587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.593596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.594394 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.594418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.594434 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.595284 4846 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="91764e7a1f8fb95f29cc1491a21892a149546265fa32024b939f64839d1fd8d6" exitCode=0 Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.595343 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"91764e7a1f8fb95f29cc1491a21892a149546265fa32024b939f64839d1fd8d6"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.595384 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1e18cc3b76117cf1d61d0d0a8b3a4d78b02d5bc0f9ebda85ab3575485e87897b"} Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.595497 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.596356 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.596391 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:20 crc kubenswrapper[4846]: I1201 00:06:20.596404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:20 crc kubenswrapper[4846]: W1201 00:06:20.857367 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.857435 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:20 crc kubenswrapper[4846]: E1201 00:06:20.881954 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.140840 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.143794 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.143851 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.143862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.143900 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:21 crc kubenswrapper[4846]: E1201 00:06:21.144531 4846 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" 
node="crc" Dec 01 00:06:21 crc kubenswrapper[4846]: W1201 00:06:21.155517 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Dec 01 00:06:21 crc kubenswrapper[4846]: E1201 00:06:21.155616 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.601757 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.601813 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.601828 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.609380 4846 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f9b36704ddd308d36586f0365df63e2686f2df761ca47b0f68e1256e4ebba830" exitCode=0 Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.609464 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f9b36704ddd308d36586f0365df63e2686f2df761ca47b0f68e1256e4ebba830"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.609899 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.611042 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.611098 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.611108 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.613498 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"52727c3430d9acfe9314b34d4098a8edf09b0adbb777284b5cff67502fc9d18a"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.613677 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.614811 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.614859 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.614891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.615904 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.615942 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.615956 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.616351 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.617323 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.617371 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.617383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.620425 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.620481 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.620496 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90"} Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.620613 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.621925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.621949 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 
00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.621958 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:21 crc kubenswrapper[4846]: I1201 00:06:21.705725 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.628811 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915"} Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.628860 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9"} Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.628975 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.630057 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.630090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.630103 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.631136 4846 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0c7b2e68b6e6341a368c5c6058f283428489f545d561dc9939ef3105d7bd0685" exitCode=0 Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.631198 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0c7b2e68b6e6341a368c5c6058f283428489f545d561dc9939ef3105d7bd0685"} Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.631259 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.631331 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632101 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632961 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.632989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.745657 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.747311 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.747366 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.747381 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:22 crc kubenswrapper[4846]: I1201 00:06:22.747414 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637630 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"838a1002417f130ce2214e32c34dc7d44d677df2fda94ea38d4e9cce126b8da6"} Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637742 4846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637752 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a900eba423160ba74522113928368c9559e4a65508093fbbbcab6e5b60188437"} Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637792 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6844fe574b684796aeb5c8b801fe3e32ca90588a2054ebe60f78858942fb0912"} Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637813 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637816 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af9a1c5d14ee062fb3acc5e24f73e804f46f4d15328b799793786cfbc5795580"} Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637837 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4dd99863112ab7429e6d93e787c8d177aec97fb76369bff21baf27cd7e99f10"} Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637873 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.637935 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639228 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639233 4846 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639439 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:23 crc kubenswrapper[4846]: I1201 00:06:23.639518 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.642362 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.644397 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.644445 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.644463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.655307 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.655454 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.656456 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.656501 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:24 crc kubenswrapper[4846]: I1201 00:06:24.656520 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.593483 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.593721 4846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.593775 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.595156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.595219 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:26 crc kubenswrapper[4846]: I1201 00:06:26.595232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:27 crc 
kubenswrapper[4846]: I1201 00:06:27.094322 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.094495 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.095989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.096058 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.096070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.129755 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.129941 4846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.129988 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.131241 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.131325 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.131336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.328641 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.328917 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.330372 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.330435 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:27 crc kubenswrapper[4846]: I1201 00:06:27.330455 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:28 crc kubenswrapper[4846]: I1201 00:06:28.354117 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:28 crc kubenswrapper[4846]: I1201 00:06:28.354337 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:28 crc kubenswrapper[4846]: I1201 00:06:28.355766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:28 crc kubenswrapper[4846]: I1201 00:06:28.355803 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:28 crc kubenswrapper[4846]: I1201 00:06:28.355815 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.070831 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.071154 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.073255 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.073333 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.073346 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.077219 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.599322 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.599565 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.601200 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.601318 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.601347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:29 crc kubenswrapper[4846]: E1201 00:06:29.635960 4846 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.657285 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.658286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.658327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:29 crc kubenswrapper[4846]: I1201 00:06:29.658339 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:30 crc kubenswrapper[4846]: I1201 00:06:30.094572 4846 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 01 00:06:30 crc kubenswrapper[4846]: I1201 00:06:30.094753 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.474364 4846 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.710965 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.711153 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.712420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.712461 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:31 crc kubenswrapper[4846]: I1201 00:06:31.712471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:32 crc kubenswrapper[4846]: W1201 00:06:32.402037 4846 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 01 00:06:32 crc kubenswrapper[4846]: I1201 00:06:32.402208 4846 trace.go:236] Trace[1655823891]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 00:06:22.400) (total time: 10001ms): Dec 01 00:06:32 crc kubenswrapper[4846]: Trace[1655823891]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:32.402) Dec 01 00:06:32 crc kubenswrapper[4846]: Trace[1655823891]: [10.001764745s] [10.001764745s] END Dec 01 00:06:32 crc kubenswrapper[4846]: E1201 00:06:32.402261 4846 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 01 00:06:32 crc kubenswrapper[4846]: E1201 00:06:32.482776 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 01 00:06:32 crc kubenswrapper[4846]: I1201 00:06:32.526478 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 01 00:06:32 crc kubenswrapper[4846]: I1201 00:06:32.526535 4846 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 01 00:06:32 crc kubenswrapper[4846]: I1201 00:06:32.532158 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 01 00:06:32 crc kubenswrapper[4846]: I1201 00:06:32.532222 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.265364 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.265598 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.267276 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.267320 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.267338 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.322785 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.668971 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.669787 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.669821 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.669831 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:33 crc kubenswrapper[4846]: I1201 00:06:33.683153 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 01 00:06:34 crc kubenswrapper[4846]: I1201 00:06:34.672491 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:34 crc kubenswrapper[4846]: I1201 00:06:34.673911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:34 crc kubenswrapper[4846]: I1201 00:06:34.673972 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:34 crc kubenswrapper[4846]: I1201 00:06:34.673988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:37 crc 
kubenswrapper[4846]: I1201 00:06:37.140179 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.140479 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.142391 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.142463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.142485 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.147751 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.528305 4846 trace.go:236] Trace[1864359517]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 00:06:22.791) (total time: 14736ms): Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[1864359517]: ---"Objects listed" error: 14736ms (00:06:37.528) Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[1864359517]: [14.736310405s] [14.736310405s] END Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.528366 4846 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.529793 4846 trace.go:236] Trace[197021583]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 00:06:24.166) (total time: 13362ms): Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[197021583]: ---"Objects listed" error: 13362ms (00:06:37.529) Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[197021583]: [13.36288462s] [13.36288462s] END Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.529836 4846 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 00:06:37 crc kubenswrapper[4846]: E1201 00:06:37.531210 4846 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.531523 4846 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.532271 4846 trace.go:236] Trace[1132746237]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Dec-2025 00:06:23.113) (total time: 14418ms): Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[1132746237]: ---"Objects listed" error: 14418ms (00:06:37.532) Dec 01 00:06:37 crc kubenswrapper[4846]: Trace[1132746237]: [14.418337986s] [14.418337986s] END Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.532310 4846 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.590325 4846 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.617492 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.627199 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:37 crc kubenswrapper[4846]: E1201 00:06:37.687489 4846 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.710019 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.710055 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.710089 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.710137 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.711068 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 00:06:37 crc kubenswrapper[4846]: I1201 00:06:37.711097 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.354845 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.354953 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.477288 4846 apiserver.go:52] "Watching apiserver" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.481290 4846 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.481924 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.482530 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.482656 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.482723 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.482929 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.483039 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.483070 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.483538 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.484401 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.484259 4846 util.go:30] "No sandbox for pod can be found. 
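
Every "Error syncing pod, skipping" entry above has the same root cause: the kubelet treats the node network as not ready because no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/ (the network plugin has not started), so regular pod sandboxes cannot be created. A trivial check of that directory, sketched below, shows the condition directly; the path is taken from the log line, everything else is illustrative.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Directory the kubelet is waiting on, per the NetworkPluginNotReady
	// message above. Until a CNI config file appears here, the network
	// stays NotReady and pod syncs are skipped.
	entries, err := os.ReadDir("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("no CNI configuration files yet; network not ready")
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", e.Name())
	}
}
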
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.487365 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.487893 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.487921 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.488011 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.488114 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.488926 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.489088 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.489722 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.490651 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.524639 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.544785 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.562776 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.578805 4846 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.583606 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.599508 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.613647 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.625993 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638466 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638621 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638709 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638746 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638776 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638806 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638839 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 00:06:38 crc kubenswrapper[4846]: 
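
The long "Failed to update status for pod" entries from 00:06:38 onward all fail for the same reason: each status patch has to pass the pod.network-node-identity.openshift.io admission webhook, and its node-local listener on https://127.0.0.1:9743 is not accepting connections yet, so the apiserver returns "connection refused" for every patch. A plain TCP dial, sketched below, reproduces that check; the two-second timeout is an arbitrary choice for illustration.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint of the network-node-identity webhook that every pod status
	// patch above is being routed through. While this dial fails, the
	// kubelet keeps retrying the status updates and logging the errors.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}
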
I1201 00:06:38.638868 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.638914 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639233 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639281 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639425 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639416 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639469 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639510 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639545 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639584 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639649 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639637 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639788 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639823 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639855 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639851 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639883 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639937 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639962 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.639991 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640003 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640015 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640041 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640067 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640096 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640121 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640086 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640145 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640169 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640196 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640222 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640247 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640274 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640299 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640323 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640347 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640370 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" 
(UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640396 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640422 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640446 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640471 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640494 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640518 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640546 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640573 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640598 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640623 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640648 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640673 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640722 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640748 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640771 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640797 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640826 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640853 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640981 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641010 4846 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641037 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641062 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641086 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641150 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641180 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641208 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641236 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641262 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641287 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 
01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641317 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641345 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641369 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641393 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641418 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641440 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641466 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641488 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641511 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641553 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641582 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641610 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641670 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641715 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641742 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641768 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641793 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641817 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641850 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641884 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641912 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641939 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641969 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641996 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642023 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642051 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642077 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642105 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642130 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642154 4846 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642179 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642204 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642230 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642265 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642292 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642315 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642339 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642365 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642390 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642416 4846 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642444 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642470 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642498 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642522 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642548 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642576 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642608 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642633 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642659 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642705 4846 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642732 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642758 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642784 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642809 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642836 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642870 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642895 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642919 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642944 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642969 4846 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642995 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643018 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643042 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643065 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643088 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643112 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643138 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643162 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643187 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643212 4846 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643240 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643264 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643294 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643325 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643354 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643379 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643405 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643430 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643453 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649399 4846 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649497 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649533 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649561 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649584 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649609 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649643 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649669 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649710 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649737 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649763 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649785 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649821 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649849 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649878 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649906 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649935 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649964 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649992 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650019 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650044 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650062 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650085 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650107 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650130 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650151 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650171 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650192 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650213 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650235 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650258 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650280 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650300 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650322 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650344 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650362 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650385 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650407 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650425 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650445 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650466 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650487 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650505 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650527 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650550 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650568 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650594 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650617 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650640 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650661 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640166 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640252 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640500 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640527 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640776 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640796 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.640908 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641114 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641134 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641201 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641377 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641523 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641593 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641864 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641878 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.641911 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642090 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642157 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642393 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642421 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642556 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642644 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642666 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.642865 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.643359 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.647023 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.648868 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.648805 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.648905 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649192 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649352 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649370 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.649936 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650182 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650217 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650580 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.651273 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.650896 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.651323 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.651746 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.651946 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652086 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652435 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652459 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652520 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652719 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653114 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653271 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653264 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653754 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653791 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653810 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653841 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.653995 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654094 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654354 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654657 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654826 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654881 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.654940 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655110 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655176 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655384 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655636 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655672 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655691 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655737 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655847 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.656045 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.655844 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.656484 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.656871 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.656867 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.657292 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.657306 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.657376 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.652573 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.657867 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.657911 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658295 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658374 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658421 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658849 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658955 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.659983 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660044 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660094 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660146 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660680 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660755 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660817 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660842 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660864 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.660972 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.662849 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.663176 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.663291 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.663326 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.663621 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664079 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664234 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664603 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664580 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664798 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664829 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664926 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.664944 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665038 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665122 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665132 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665390 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665404 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665601 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.665857 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:06:39.16582767 +0000 UTC m=+19.946596924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.665942 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666230 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666304 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666557 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666776 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666718 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666721 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666879 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.666906 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.667348 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.667423 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.667448 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.667464 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.667869 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668120 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668262 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658443 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668317 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668379 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668464 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668499 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668532 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668546 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668545 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668558 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668560 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668657 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668695 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.668754 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668765 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668813 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.668846 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:39.168821604 +0000 UTC m=+19.949590848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668951 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.668993 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669008 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669051 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669073 4846 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.669091 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669088 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.669157 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:39.169140373 +0000 UTC m=+19.949909637 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669280 4846 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669298 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669314 4846 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669332 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669347 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669359 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669373 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669389 4846 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669403 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669418 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669432 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669448 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on 
node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669459 4846 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669469 4846 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669478 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669488 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669500 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669512 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669522 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669532 4846 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669541 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669550 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669560 4846 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669573 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669585 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 01 
00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669598 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669613 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669624 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669633 4846 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669644 4846 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669657 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669670 4846 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669732 4846 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669743 4846 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669755 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669764 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669772 4846 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669781 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669790 
4846 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669799 4846 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669809 4846 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669818 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669828 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669839 4846 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669849 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669858 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669867 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669877 4846 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669886 4846 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669895 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669904 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669914 4846 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669924 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669937 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669947 4846 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669957 4846 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669967 4846 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669978 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669988 4846 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669921 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670002 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670085 4846 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670104 4846 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670123 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670139 4846 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670154 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670170 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670188 4846 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670202 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670216 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670229 4846 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670243 4846 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670255 4846 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670269 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670283 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670298 4846 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670304 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670325 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670338 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670350 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670370 4846 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670384 4846 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670397 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670412 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670425 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670445 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670459 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670474 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node 
\"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670489 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670502 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670514 4846 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670527 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670541 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670556 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670568 4846 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670581 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670594 4846 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670607 4846 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670619 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670631 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670644 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670656 4846 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670668 4846 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670715 4846 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670738 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670751 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670763 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670776 4846 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670788 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670800 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670813 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670828 4846 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670841 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670872 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670896 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" 
(UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670913 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670929 4846 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670946 4846 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670961 4846 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.670979 4846 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671001 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671018 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671035 4846 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671051 4846 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671068 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671104 4846 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671121 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671135 4846 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671147 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671161 4846 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671175 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671187 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671201 4846 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671215 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671227 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671241 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671259 4846 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671271 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671284 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671298 4846 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671298 
4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671321 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671346 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671369 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671395 4846 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.669920 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671771 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671808 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671913 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.671940 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.672160 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.672762 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.658800 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.673158 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.673592 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.673968 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.673990 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.674008 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.674060 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.674177 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.674264 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.674460 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.675060 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.675286 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.683738 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.691202 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.691883 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.691908 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.691926 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.692011 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:39.191985556 +0000 UTC m=+19.972754710 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.691645 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692363 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692487 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692493 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.692551 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.692591 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.692610 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692603 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: E1201 00:06:38.692743 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:39.192712049 +0000 UTC m=+19.973481123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692825 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.692818 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.693206 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.694055 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.694058 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.694227 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.694420 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.695210 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.695599 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.695696 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.696642 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.697769 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.700836 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.701791 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.703438 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915" exitCode=255 Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.703972 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.704064 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915"} Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.704396 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.707326 4846 scope.go:117] "RemoveContainer" containerID="cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.707669 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.707747 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.707779 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.708217 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.710108 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.714838 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.717993 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.719779 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.719842 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.720050 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.720464 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.720981 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.721005 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.721386 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.721553 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.724073 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.725545 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.732217 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.734631 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.745128 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.752222 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.753971 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06b
c35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.754064 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.764450 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772114 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772169 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772208 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772220 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772233 4846 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772256 4846 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772265 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772274 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772283 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772274 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772291 4846 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772417 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772503 4846 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772529 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772542 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772567 4846 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772579 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772594 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772608 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772621 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772633 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772646 4846 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772658 4846 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772670 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772696 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772708 4846 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772720 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772731 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772742 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772752 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772763 4846 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772774 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772783 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772794 4846 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772805 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772814 4846 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772824 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772834 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772844 4846 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772853 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772864 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772876 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772887 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772898 4846 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772908 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772923 4846 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772935 4846 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772948 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772959 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772971 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772984 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.772997 4846 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773012 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773024 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773039 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" 
(UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773053 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773064 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.773658 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.784125 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01
T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.793965 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.801307 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.807419 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.808939 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.823817 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 01 00:06:38 crc kubenswrapper[4846]: I1201 00:06:38.834875 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.176605 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.176831 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.176929 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.177360 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.177494 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.177480 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:06:40.177426049 +0000 UTC m=+20.958195153 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.177589 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:40.177573633 +0000 UTC m=+20.958342847 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.177625 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:40.177612114 +0000 UTC m=+20.958381328 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.277997 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.278123 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278250 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278304 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278327 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278328 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278363 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278385 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278421 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:40.27839485 +0000 UTC m=+21.059163934 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:39 crc kubenswrapper[4846]: E1201 00:06:39.278459 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:40.278436592 +0000 UTC m=+21.059205706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.584716 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.585922 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.587961 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.589189 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.590487 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.591198 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.591950 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.593142 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.593966 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.595350 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.595854 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.596076 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.597572 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.598292 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.598953 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.600166 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.600877 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.602038 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.602466 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.603324 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.604711 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.605315 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.606624 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.607206 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.608584 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.609170 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.610100 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.610221 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.612062 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.612671 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.613862 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.614447 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.615758 4846 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.615884 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.617968 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.619095 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.619637 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.621771 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.623026 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.623777 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.624271 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.625305 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.627521 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.628137 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.629490 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.630328 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.631630 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.635318 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.636071 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.637299 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.640000 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.641846 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.642505 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.643877 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.643957 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.645002 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.645989 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.647211 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.663807 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.691742 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01
T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.707025 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.708348 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.708418 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"abff62abb104c9137f3e459c36d2917a0d1061f9dfc73a2a157a41b20d22a4db"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.711013 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.712787 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.713502 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.714449 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ddb37fa2755f5decdb088ee5aae8702e8fbc21b64447071d6d8cb05de50c7317"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.719086 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19"} Dec 01 00:06:39 crc 
kubenswrapper[4846]: I1201 00:06:39.719134 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.719152 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c3c5c6c4e86adf898a1f0e4ddce79a6d0c3869d2abeaa7fb0c0ea939eafb475c"} Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.721449 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.734169 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.746450 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.758063 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.769447 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.784402 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.797270 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.810045 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:39 crc kubenswrapper[4846]: I1201 00:06:39.829714 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.184853 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.184984 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.185061 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.185309 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.185411 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.185317 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:06:42.185209955 +0000 UTC m=+22.965979029 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.185499 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:42.185488493 +0000 UTC m=+22.966257777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.185548 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:42.185512724 +0000 UTC m=+22.966281828 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.286023 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.286073 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286228 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286244 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286243 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286289 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286301 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286256 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286359 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:42.286339561 +0000 UTC m=+23.067108635 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.286449 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:42.286433894 +0000 UTC m=+23.067202968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.580377 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.580379 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.580536 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.580597 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.580672 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.580754 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.732060 4846 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.733944 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.733985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.734000 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.734067 4846 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.742675 4846 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.743068 4846 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.744377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.744417 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.744431 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.744448 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.744461 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.771955 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.777700 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.777761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.777773 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.777794 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.777810 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.796040 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.800633 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.800766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.800792 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.800823 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.800847 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.820829 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the previous attempt omitted... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.826137 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.826187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.826206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.826232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.826249 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.842908 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the previous attempt omitted... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.848426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.848502 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.848545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.848581 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.848604 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.865328 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the previous attempt omitted... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:40Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:40 crc kubenswrapper[4846]: E1201 00:06:40.865491 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.868103 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.868151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.868170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.868192 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.868203 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.970671 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.970737 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.970746 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.970765 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:40 crc kubenswrapper[4846]: I1201 00:06:40.970781 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:40Z","lastTransitionTime":"2025-12-01T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.073900 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.073957 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.073969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.073987 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.073999 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.176906 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.176944 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.176956 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.176975 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.176988 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.279894 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.279963 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.279983 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.280005 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.280019 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.386962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.387034 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.387060 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.387094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.387117 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.489506 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.489552 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.489565 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.489584 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.489597 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.591992 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.592114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.592156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.592182 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.592197 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.694891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.694955 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.694972 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.694997 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.695013 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.727799 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.747098 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.763470 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.778957 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.808450 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.808515 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.808527 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.808545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.808560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.813339 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.836181 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.852408 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a
9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.867476 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.883207 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:41Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.911984 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.912036 4846 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.912048 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.912069 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:41 crc kubenswrapper[4846]: I1201 00:06:41.912086 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:41Z","lastTransitionTime":"2025-12-01T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.014838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.014909 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.014931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.014966 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.014991 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.117579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.117646 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.117664 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.117704 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.117723 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.205435 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.205527 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.205560 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.205664 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.205737 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.205660 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.205627038 +0000 UTC m=+26.986396112 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.205818 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.205797103 +0000 UTC m=+26.986566337 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.205836 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.205826964 +0000 UTC m=+26.986596038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.220402 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.220451 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.220466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.220486 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.220499 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.306187 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.306249 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306381 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306400 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306397 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306457 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306474 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306416 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306624 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.30660359 +0000 UTC m=+27.087372664 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.306647 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.306636691 +0000 UTC m=+27.087405765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.323418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.323496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.323510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.323534 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.323546 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.426167 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.426209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.426217 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.426234 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.426245 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.528885 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.528954 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.528972 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.528998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.529014 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.579576 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.579698 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.579713 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.579793 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.580005 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:42 crc kubenswrapper[4846]: E1201 00:06:42.580223 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.632085 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.632146 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.632172 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.632204 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.632226 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.735192 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.735249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.735264 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.735284 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.735297 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.838089 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.838138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.838151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.838189 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.838201 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.941249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.941285 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.941296 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.941313 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:42 crc kubenswrapper[4846]: I1201 00:06:42.941327 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:42Z","lastTransitionTime":"2025-12-01T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.044088 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.044143 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.044161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.044185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.044200 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.147140 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.147197 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.147208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.147230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.147243 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.249938 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.249996 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.250006 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.250024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.250035 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.353021 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.353057 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.353070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.353088 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.353100 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.456734 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.456811 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.456838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.456873 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.456896 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.559497 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.559579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.559599 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.559630 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.559651 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.663149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.663217 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.663229 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.663249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.663261 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.766509 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.767009 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.767155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.767292 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.767956 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.870940 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.870972 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.870983 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.870998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.871008 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.973674 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.973771 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.973784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.973802 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:43 crc kubenswrapper[4846]: I1201 00:06:43.973816 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:43Z","lastTransitionTime":"2025-12-01T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.076303 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.076375 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.076392 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.076416 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.076429 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.180164 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.180211 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.180221 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.180239 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.180250 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.282363 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.282407 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.282422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.282439 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.282452 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.385543 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.385589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.385601 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.385619 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.385629 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.487725 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.487761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.487771 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.487789 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.487799 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.579582 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:44 crc kubenswrapper[4846]: E1201 00:06:44.579752 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.580130 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:44 crc kubenswrapper[4846]: E1201 00:06:44.580231 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.580295 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:44 crc kubenswrapper[4846]: E1201 00:06:44.580358 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.590219 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.590251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.590260 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.590274 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.590282 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.693145 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.693173 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.693181 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.693194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.693204 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.796206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.796241 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.796262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.796280 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.796291 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.899044 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.899079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.899089 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.899109 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:44 crc kubenswrapper[4846]: I1201 00:06:44.899119 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:44Z","lastTransitionTime":"2025-12-01T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.001916 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.001955 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.001967 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.001985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.001997 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.105157 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.105208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.105227 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.105247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.105262 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.107875 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pjv9m"] Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.108304 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.109450 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-grqqg"] Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.109664 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-grsdk"] Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.109943 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.110647 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fpx9q"] Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.110833 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.111587 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117279 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.117611 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.117665 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.117673 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.117739 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.117848 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.117917 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117930 4846 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.117930 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.118017 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117862 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.118832 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.117877 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.118922 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.119162 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.119145 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.119216 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117863 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.119218 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117953 4846 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.117959 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.118028 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.120392 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.120443 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.120822 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.121534 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gzjjx"] Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.124067 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.125116 4846 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 01 00:06:45 crc kubenswrapper[4846]: E1201 00:06:45.125256 4846 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.131070 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.132009 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134012 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134055 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn\") pod \"ovnkube-node-fpx9q\" (UID: 
\"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134082 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-netns\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134120 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134143 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-conf-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134165 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-daemon-config\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134200 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134222 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-etc-kubernetes\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134244 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134297 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqx7\" (UniqueName: \"kubernetes.io/projected/b2776496-08ee-4019-83d5-a487629a1c54-kube-api-access-2vqx7\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134334 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134357 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134393 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d981647e-2c46-4ad1-afd7-757ef36643f8-rootfs\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134418 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d981647e-2c46-4ad1-afd7-757ef36643f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134442 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134463 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134486 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134510 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-cnibin\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134532 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " 
pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134551 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-system-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134568 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134584 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134600 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134615 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134633 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cni-binary-copy\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134659 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a49e025b-7c84-4c37-b84b-269c5c74a9b2-hosts-file\") pod \"node-resolver-pjv9m\" (UID: \"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134699 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134722 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-grsdk\" (UID: 
\"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134745 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-bin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.134763 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.135481 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-k8s-cni-cncf-io\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.135601 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-multus-certs\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.135703 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d981647e-2c46-4ad1-afd7-757ef36643f8-proxy-tls\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.135826 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.135929 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136020 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136111 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-multus\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136208 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6grv\" (UniqueName: \"kubernetes.io/projected/a49e025b-7c84-4c37-b84b-269c5c74a9b2-kube-api-access-d6grv\") pod \"node-resolver-pjv9m\" (UID: \"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136297 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-os-release\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136390 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-socket-dir-parent\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136493 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136591 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.136973 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-system-cni-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137084 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cnibin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137201 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-os-release\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137287 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-kubelet\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137375 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-hostroot\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137464 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b86g\" (UniqueName: \"kubernetes.io/projected/d981647e-2c46-4ad1-afd7-757ef36643f8-kube-api-access-6b86g\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137552 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.137642 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpr7\" (UniqueName: \"kubernetes.io/projected/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-kube-api-access-bhpr7\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.143918 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.157032 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.175917 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.190183 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.206136 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.208057 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.208093 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.208105 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.208125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.208141 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.223159 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.238870 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239097 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a49e025b-7c84-4c37-b84b-269c5c74a9b2-hosts-file\") pod \"node-resolver-pjv9m\" (UID: 
\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.238917 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a49e025b-7c84-4c37-b84b-269c5c74a9b2-hosts-file\") pod \"node-resolver-pjv9m\" (UID: \"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239457 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239540 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239621 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-bin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239728 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239757 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-bin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239937 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-k8s-cni-cncf-io\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239840 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-k8s-cni-cncf-io\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240132 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-multus-certs\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 
00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240213 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d981647e-2c46-4ad1-afd7-757ef36643f8-proxy-tls\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240287 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240377 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240494 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240279 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-multus-certs\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240581 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240612 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240699 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240779 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-multus\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240855 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d6grv\" (UniqueName: \"kubernetes.io/projected/a49e025b-7c84-4c37-b84b-269c5c74a9b2-kube-api-access-d6grv\") pod \"node-resolver-pjv9m\" (UID: \"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.239510 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240827 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-cni-multus\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241020 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-system-cni-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241104 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-os-release\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241210 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-socket-dir-parent\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241309 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-os-release\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.240800 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2776496-08ee-4019-83d5-a487629a1c54-cni-binary-copy\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241310 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241367 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241391 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cnibin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241418 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-hostroot\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241416 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-socket-dir-parent\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241257 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-system-cni-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241446 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-os-release\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241466 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-kubelet\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241478 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-hostroot\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241492 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b86g\" (UniqueName: \"kubernetes.io/projected/d981647e-2c46-4ad1-afd7-757ef36643f8-kube-api-access-6b86g\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241521 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket\") pod \"ovnkube-node-fpx9q\" (UID: 
\"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241526 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-os-release\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241535 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-var-lib-kubelet\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241558 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241492 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cnibin\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241545 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhpr7\" (UniqueName: \"kubernetes.io/projected/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-kube-api-access-bhpr7\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241704 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241730 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241745 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-netns\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241763 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241786 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-conf-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241791 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241798 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-host-run-netns\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241755 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241850 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-conf-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241850 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241811 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-daemon-config\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241930 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241950 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-etc-kubernetes\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241971 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241989 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242008 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-etc-kubernetes\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.241994 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242038 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242066 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vqx7\" (UniqueName: \"kubernetes.io/projected/b2776496-08ee-4019-83d5-a487629a1c54-kube-api-access-2vqx7\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242108 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242131 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242154 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242171 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d981647e-2c46-4ad1-afd7-757ef36643f8-rootfs\") pod \"machine-config-daemon-grqqg\" (UID: 
\"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242202 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d981647e-2c46-4ad1-afd7-757ef36643f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242228 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242250 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242273 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d981647e-2c46-4ad1-afd7-757ef36643f8-rootfs\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242277 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-cnibin\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242307 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-cnibin\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242324 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242333 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242358 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " 
pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242379 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-system-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242388 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242414 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-daemon-config\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242459 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-system-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242489 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242510 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242539 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242540 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242559 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cni-binary-copy\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242574 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242664 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-multus-cni-dir\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242829 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2776496-08ee-4019-83d5-a487629a1c54-tuning-conf-dir\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.242960 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d981647e-2c46-4ad1-afd7-757ef36643f8-mcd-auth-proxy-config\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.243189 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-cni-binary-copy\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.243267 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.246405 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d981647e-2c46-4ad1-afd7-757ef36643f8-proxy-tls\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.255410 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.259873 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhpr7\" (UniqueName: \"kubernetes.io/projected/607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c-kube-api-access-bhpr7\") pod \"multus-gzjjx\" (UID: \"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\") " pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.261911 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b86g\" (UniqueName: \"kubernetes.io/projected/d981647e-2c46-4ad1-afd7-757ef36643f8-kube-api-access-6b86g\") pod \"machine-config-daemon-grqqg\" (UID: \"d981647e-2c46-4ad1-afd7-757ef36643f8\") " pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.262719 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vqx7\" (UniqueName: \"kubernetes.io/projected/b2776496-08ee-4019-83d5-a487629a1c54-kube-api-access-2vqx7\") pod \"multus-additional-cni-plugins-grsdk\" (UID: \"b2776496-08ee-4019-83d5-a487629a1c54\") " pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.263084 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6grv\" (UniqueName: \"kubernetes.io/projected/a49e025b-7c84-4c37-b84b-269c5c74a9b2-kube-api-access-d6grv\") 
pod \"node-resolver-pjv9m\" (UID: \"a49e025b-7c84-4c37-b84b-269c5c74a9b2\") " pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.269642 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.285622 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.301849 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.311554 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.311626 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.311649 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.311677 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.311714 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.315760 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.333612 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.347948 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.369454 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.390022 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.403521 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.414931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.415248 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.415309 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.415379 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.415506 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.426823 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.430775 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pjv9m" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.442462 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.447342 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.450072 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-grsdk" Dec 01 00:06:45 crc kubenswrapper[4846]: W1201 00:06:45.454393 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd981647e_2c46_4ad1_afd7_757ef36643f8.slice/crio-80852a54f385ed49e972b75a85a8b455a7439da33efe80db4fd94b05d627fd18 WatchSource:0}: Error finding container 80852a54f385ed49e972b75a85a8b455a7439da33efe80db4fd94b05d627fd18: Status 404 returned error can't find the container with id 80852a54f385ed49e972b75a85a8b455a7439da33efe80db4fd94b05d627fd18 Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.463960 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.473495 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gzjjx" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.485852 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.498825 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.518308 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.518430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.518445 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.518464 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.518476 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.620988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.621029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.621039 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.621058 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.621080 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.723376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.723443 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.723455 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.723475 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.723513 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.741083 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerStarted","Data":"e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.741549 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerStarted","Data":"040717ff853a688361c6c214fdc77fb4b35a711be65e03823e4e7e5540c68ea8"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.742843 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerStarted","Data":"64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.742876 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerStarted","Data":"c8fcdae09a9ab1714eedbf58004cf819342c034f38a586c6c21bdf7e636ee0c9"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.744897 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.745042 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.745388 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"80852a54f385ed49e972b75a85a8b455a7439da33efe80db4fd94b05d627fd18"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.748313 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pjv9m" event={"ID":"a49e025b-7c84-4c37-b84b-269c5c74a9b2","Type":"ContainerStarted","Data":"fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.748354 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pjv9m" event={"ID":"a49e025b-7c84-4c37-b84b-269c5c74a9b2","Type":"ContainerStarted","Data":"380fda79b657fe7b0eb9b3bfa597e0ec1459f9df3f31cd6a6d64e24622b53a03"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.764649 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.784902 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.802080 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.813803 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.827736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.827788 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.827801 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.827819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.827829 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.833920 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.873263 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.907811 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.930519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.930560 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.930568 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.930587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.930597 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:45Z","lastTransitionTime":"2025-12-01T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.932528 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.951804 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.962575 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.968546 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 
00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.973307 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.980968 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:45 crc kubenswrapper[4846]: I1201 00:06:45.997783 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.011338 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.025437 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.038820 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.038852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.038862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.038879 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.038887 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.043145 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.060130 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] 
[graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.064008 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.074347 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.090313 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.104578 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd
445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.123791 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.138940 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.142663 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.142872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.142947 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.143019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.143090 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.153033 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.166523 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.179137 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.198887 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.215018 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.241059 4846 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.241229 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert podName:358371ac-c594-492b-98ad-0da4bc7d9d16 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.741192801 +0000 UTC m=+27.521961875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert") pod "ovnkube-node-fpx9q" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.241609 4846 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.241773 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides podName:358371ac-c594-492b-98ad-0da4bc7d9d16 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.741739338 +0000 UTC m=+27.522508602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides") pod "ovnkube-node-fpx9q" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.243335 4846 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.243556 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config podName:358371ac-c594-492b-98ad-0da4bc7d9d16 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.743521684 +0000 UTC m=+27.524290978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config") pod "ovnkube-node-fpx9q" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.245699 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.245754 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.245770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.245792 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.245809 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.253023 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.253291 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:06:54.253248387 +0000 UTC m=+35.034017461 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.253598 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.253748 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.253838 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.253919 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.253937 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:54.253926428 +0000 UTC m=+35.034695692 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.254098 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:54.254067262 +0000 UTC m=+35.034836356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.263205 4846 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.287565 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.289382 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.293723 4846 projected.go:194] Error preparing data for projected volume kube-api-access-2xpt9 for pod openshift-ovn-kubernetes/ovnkube-node-fpx9q: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.293931 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9 podName:358371ac-c594-492b-98ad-0da4bc7d9d16 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:46.793899226 +0000 UTC m=+27.574668300 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2xpt9" (UniqueName: "kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9") pod "ovnkube-node-fpx9q" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.348562 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.348934 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.348999 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.349066 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.349125 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.354996 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.355057 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355227 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355259 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355274 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355331 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-01 00:06:54.355311543 +0000 UTC m=+35.136080617 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355227 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355559 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355630 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.355755 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:06:54.355743516 +0000 UTC m=+35.136512590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.452055 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.452107 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.452120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.452140 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.452152 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.468188 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.555343 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.555755 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.555905 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.556001 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.556090 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.580001 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.580074 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.580101 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.581151 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.581181 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:46 crc kubenswrapper[4846]: E1201 00:06:46.581377 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.659000 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.659035 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.659047 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.659065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.659076 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.707939 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.709111 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.755002 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f" exitCode=0 Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.755104 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764400 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764490 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764524 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764868 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764971 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.764986 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.765910 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.766415 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.775578 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.782473 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.799267 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.811864 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.825533 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.838351 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.852759 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.866383 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.866767 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.870580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.870629 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.870639 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.870663 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.870673 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.872544 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") pod \"ovnkube-node-fpx9q\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.880225 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.905975 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.919400 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.938095 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.949399 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.962072 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.962917 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:46Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.974090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.974137 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.974149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.974170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:46 crc kubenswrapper[4846]: I1201 00:06:46.974184 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:46Z","lastTransitionTime":"2025-12-01T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:46 crc kubenswrapper[4846]: W1201 00:06:46.978051 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod358371ac_c594_492b_98ad_0da4bc7d9d16.slice/crio-0d493dd81f93c5fbddc72234791d431ef70c361c190160174be22e84a64ecbe1 WatchSource:0}: Error finding container 0d493dd81f93c5fbddc72234791d431ef70c361c190160174be22e84a64ecbe1: Status 404 returned error can't find the container with id 0d493dd81f93c5fbddc72234791d431ef70c361c190160174be22e84a64ecbe1 Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.076845 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.076926 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.076943 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.076969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.076984 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.179908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.179960 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.179969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.179983 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.179994 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.282881 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.282923 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.282937 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.282955 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.282967 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.386846 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.387289 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.387305 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.387328 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.387343 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.490075 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.490120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.490128 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.490145 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.490157 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.592637 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.592718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.592731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.592751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.592765 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.695797 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.695848 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.695860 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.695878 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.695894 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.746532 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f9qcg"] Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.749766 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.755494 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.755778 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.755947 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.757695 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.763957 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5" exitCode=0 Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.764054 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.765210 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" exitCode=0 Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.765239 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.765259 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"0d493dd81f93c5fbddc72234791d431ef70c361c190160174be22e84a64ecbe1"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.776619 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zwn2\" (UniqueName: \"kubernetes.io/projected/23a413db-a45d-4559-b7ee-4c4c9b75a24a-kube-api-access-7zwn2\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.776710 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a413db-a45d-4559-b7ee-4c4c9b75a24a-serviceca\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.776749 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a413db-a45d-4559-b7ee-4c4c9b75a24a-host\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.778097 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc0143
6371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.800784 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.802282 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.802327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.802341 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.802362 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.802373 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.817127 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.829131 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.848876 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.877676 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zwn2\" (UniqueName: \"kubernetes.io/projected/23a413db-a45d-4559-b7ee-4c4c9b75a24a-kube-api-access-7zwn2\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.877833 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a413db-a45d-4559-b7ee-4c4c9b75a24a-serviceca\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.877869 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a413db-a45d-4559-b7ee-4c4c9b75a24a-host\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.877938 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/23a413db-a45d-4559-b7ee-4c4c9b75a24a-host\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.879437 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/23a413db-a45d-4559-b7ee-4c4c9b75a24a-serviceca\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.880073 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.900343 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.906518 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.906557 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.906568 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.906587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.906598 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:47Z","lastTransitionTime":"2025-12-01T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.907975 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zwn2\" (UniqueName: \"kubernetes.io/projected/23a413db-a45d-4559-b7ee-4c4c9b75a24a-kube-api-access-7zwn2\") pod \"node-ca-f9qcg\" (UID: \"23a413db-a45d-4559-b7ee-4c4c9b75a24a\") " pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.915052 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.931656 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\
",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.950896 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] 
Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.965372 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.976997 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:47 crc kubenswrapper[4846]: I1201 00:06:47.993630 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:47Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.009723 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.009782 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.009796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.009819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.009834 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.011052 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.027092 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.042958 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.062946 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.070829 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f9qcg" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.092771 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z 
is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.113294 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.113357 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.113429 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.113457 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.113473 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.116224 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.133801 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.148716 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.165924 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.177452 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.193957 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.211733 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.219021 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.219071 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.219081 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.219102 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.219114 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.224471 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.237411 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.254187 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.343139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.343178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.343187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.343203 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.343213 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.445588 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.445633 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.445643 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.445662 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.445673 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.548473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.548544 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.548567 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.548594 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.548612 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.580099 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.580174 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.580290 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:48 crc kubenswrapper[4846]: E1201 00:06:48.580357 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:48 crc kubenswrapper[4846]: E1201 00:06:48.580496 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:48 crc kubenswrapper[4846]: E1201 00:06:48.580669 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.651578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.651625 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.651655 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.651676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.651712 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.754353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.754403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.754415 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.754432 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.754443 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774124 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774236 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774255 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774273 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774286 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.774297 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.775919 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f9qcg" event={"ID":"23a413db-a45d-4559-b7ee-4c4c9b75a24a","Type":"ContainerStarted","Data":"425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.775946 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f9qcg" event={"ID":"23a413db-a45d-4559-b7ee-4c4c9b75a24a","Type":"ContainerStarted","Data":"fa2be206a428ddd19e9c898967b4add3d25e588641e92944fbea4aacd651b97c"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.779794 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888" exitCode=0 Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.779909 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.795653 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.811413 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.825102 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.841932 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.853645 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.858667 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.858709 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.858718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc 
kubenswrapper[4846]: I1201 00:06:48.858732 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.858744 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.870201 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.884944 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.905432 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.922482 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.943235 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.956983 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.961395 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.961446 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.961459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.961478 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.961491 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:48Z","lastTransitionTime":"2025-12-01T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.973344 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:48 crc kubenswrapper[4846]: I1201 00:06:48.987204 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:48Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.005230 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.018943 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.035162 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.049613 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.064332 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.064397 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.064411 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.064432 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.064446 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.065835 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.077925 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.094611 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.106956 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.120351 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.133258 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.153915 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167432 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167462 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167432 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.167473 4846 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.186924 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z 
is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.202723 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.214393 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.270398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.270430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.270440 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.270456 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.270467 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.373407 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.373468 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.373482 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.373508 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.373527 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.476206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.476269 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.476281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.476303 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.476319 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.578993 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.579039 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.579053 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.579073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.579086 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.593244 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.610774 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.625954 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.646097 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.659271 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.672561 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.681740 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.681789 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.681807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.681830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.681848 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.684873 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.698008 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.713973 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.735487 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.750512 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.765786 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.784305 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.784389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.784403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.784423 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.784436 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.785371 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.786363 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f" exitCode=0 Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.786418 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.802573 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.818023 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.836996 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.852944 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.877297 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.886697 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.886757 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.886770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.886793 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.886807 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.892529 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.907831 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.920286 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.935263 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.949876 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.963698 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.976910 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.989367 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.989449 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.989477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.989497 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.989511 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:49Z","lastTransitionTime":"2025-12-01T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:49 crc kubenswrapper[4846]: I1201 00:06:49.990713 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.013619 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.035238 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.093166 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.093205 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.093216 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.093234 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.093244 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.196247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.196303 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.196316 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.196335 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.196347 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.298897 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.298943 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.298955 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.298973 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.298986 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.401073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.401125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.401143 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.401160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.401171 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.504345 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.504386 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.504395 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.504410 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.504421 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.579860 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.579935 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:50 crc kubenswrapper[4846]: E1201 00:06:50.579991 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.580082 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:50 crc kubenswrapper[4846]: E1201 00:06:50.580182 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:50 crc kubenswrapper[4846]: E1201 00:06:50.580329 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.608110 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.608540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.608553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.608573 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.608586 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.711667 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.711744 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.711759 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.711781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.711794 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.794804 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.800245 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db" exitCode=0 Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.800290 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.814158 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.814199 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.814213 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.814231 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.814244 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.817383 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.835648 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.849129 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.868097 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.885595 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.902441 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.920774 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.923171 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.923211 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.923220 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.923237 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.923250 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:50Z","lastTransitionTime":"2025-12-01T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.936900 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vq
x7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.954278 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.972980 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:50 crc kubenswrapper[4846]: I1201 00:06:50.986268 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:50Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.002194 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.017113 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.035312 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.044747 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.044814 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.044827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.044851 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.044866 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.147586 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.150097 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.150120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.150140 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.150151 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.170290 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.176437 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.176479 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.176490 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.176505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.176516 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.194730 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.202988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.203032 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.203045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.203066 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.203078 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.216623 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.219783 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.219833 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.219842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.219854 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.219866 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.233734 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.238294 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.238330 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.238339 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.238359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.238375 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.252790 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: E1201 00:06:51.252965 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.254880 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.254921 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.254931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.254949 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.254961 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.357726 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.357786 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.357803 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.357837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.357855 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.461850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.462256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.462265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.462283 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.462295 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.565997 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.566054 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.566070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.566091 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.566104 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.669329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.669398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.669410 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.669430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.669441 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.772359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.772429 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.772444 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.772468 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.772489 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.805918 4846 generic.go:334] "Generic (PLEG): container finished" podID="b2776496-08ee-4019-83d5-a487629a1c54" containerID="fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108" exitCode=0 Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.805964 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerDied","Data":"fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.821965 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.838229 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.853813 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.874866 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z 
is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.876595 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.876702 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.876717 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.876739 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.876754 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.892565 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.910644 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.926730 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.944848 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.961930 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.979804 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.981830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.981884 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.982230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.982293 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.982307 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:51Z","lastTransitionTime":"2025-12-01T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:51 crc kubenswrapper[4846]: I1201 00:06:51.994236 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:51Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.011043 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.023797 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.037703 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.085602 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.085638 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.085647 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.085663 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.085672 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.188842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.188923 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.188941 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.188963 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.188977 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.294843 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.294899 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.294912 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.294930 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.294943 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.398010 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.398050 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.398059 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.398074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.398083 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.501082 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.501122 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.501148 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.501163 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.501172 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.579529 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.579748 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:52 crc kubenswrapper[4846]: E1201 00:06:52.579868 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.579921 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:52 crc kubenswrapper[4846]: E1201 00:06:52.580118 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:52 crc kubenswrapper[4846]: E1201 00:06:52.580275 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.603714 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.603750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.603761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.603821 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.603835 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.707982 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.708019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.708029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.708045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.708055 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.811070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.811426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.811488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.811577 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.811638 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.814532 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" event={"ID":"b2776496-08ee-4019-83d5-a487629a1c54","Type":"ContainerStarted","Data":"b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.838879 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713
d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.860456 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.881749 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.900589 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.915392 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.916053 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.916116 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.916134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.916160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.916178 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:52Z","lastTransitionTime":"2025-12-01T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.931769 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.950464 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.967675 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:52 crc kubenswrapper[4846]: I1201 00:06:52.988244 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.008506 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.019714 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.019751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.019761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.019781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.019796 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.029918 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.046533 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.058748 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.079216 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.123374 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.123423 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.123434 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.123455 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.123467 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.231149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.231224 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.231249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.231293 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.231317 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.334322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.334366 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.334380 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.334398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.334410 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.437753 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.437819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.437829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.437850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.437863 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.541283 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.541336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.541348 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.541373 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.541388 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.644307 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.644364 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.644379 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.644402 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.644414 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.747075 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.747163 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.747175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.747194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.747206 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.824574 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.825118 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.825182 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.844935 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.851574 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.851642 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.851656 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.851676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.851715 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.861535 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.862734 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.891537 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc 
kubenswrapper[4846]: I1201 00:06:53.912983 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.937366 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.955654 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.955726 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.955742 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.955762 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.955797 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:53Z","lastTransitionTime":"2025-12-01T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.961068 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.976944 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:53 crc kubenswrapper[4846]: I1201 00:06:53.993110 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:53Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.008798 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.020454 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.038728 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.049674 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.058910 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.058954 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.058965 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc 
kubenswrapper[4846]: I1201 00:06:54.058985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.058998 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.066426 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 
00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.083939 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.102374 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.116440 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.128761 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.151058 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e
6f527326a896dfec2414ea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.161572 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.161611 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.161624 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.161645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.161657 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.166576 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.181944 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.195654 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.216947 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.232153 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.248519 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.259104 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.260334 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.260500 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.260554 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.260707 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.260805 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:07:10.260755574 +0000 UTC m=+51.041524718 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.260902 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.260973 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:10.260952731 +0000 UTC m=+51.041721845 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.261104 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:10.261076605 +0000 UTC m=+51.041845679 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.264322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.264372 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.264382 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.264403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.264420 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.274884 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mou
ntPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.290595 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.309043 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.361793 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.361852 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362000 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362022 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362037 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362102 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:10.362086027 +0000 UTC m=+51.142855111 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362187 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362251 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362276 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.362385 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:10.362344176 +0000 UTC m=+51.143113430 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.368138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.368207 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.368222 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.368267 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.368285 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.472082 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.472733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.472750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.472774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.472809 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.575461 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.575498 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.575510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.575528 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.575538 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.579510 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.579555 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.579539 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.579812 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.579918 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:54 crc kubenswrapper[4846]: E1201 00:06:54.580049 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.678694 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.678740 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.678750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.678769 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.678782 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.781777 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.781827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.781850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.781870 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.781884 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.828667 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.853583 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.868065 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.881101 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.886589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.886627 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.886644 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.886672 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.886701 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.893521 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.911879 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"ru
n-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.929995 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b
5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bi
nary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.945426 4846 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.963366 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.978026 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.989477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.989523 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.989535 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.989556 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.989568 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:54Z","lastTransitionTime":"2025-12-01T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:54 crc kubenswrapper[4846]: I1201 00:06:54.994029 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:54Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.011861 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.030315 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.049076 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.065571 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.080454 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:55Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.092470 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.092528 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.092540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.092558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.092569 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.194853 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.194907 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.194919 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.194940 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.194953 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.297625 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.297712 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.297729 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.297752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.297768 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.400515 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.400554 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.400564 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.400580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.400589 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.502661 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.502713 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.502722 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.502748 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.502758 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.605652 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.605707 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.605717 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.605733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.605744 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.708964 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.709030 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.709053 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.709082 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.709100 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.811895 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.811962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.811987 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.812013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.812030 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.914706 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.914773 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.914785 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.914801 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:55 crc kubenswrapper[4846]: I1201 00:06:55.914813 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:55Z","lastTransitionTime":"2025-12-01T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.020386 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.020449 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.020471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.020501 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.020523 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.124384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.124450 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.124468 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.124495 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.124512 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.227184 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.227266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.227291 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.227326 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.227349 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.330508 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.330557 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.330574 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.330600 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.330616 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.433671 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.433764 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.433776 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.433794 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.433808 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.537238 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.537307 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.537326 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.537352 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.537370 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.580169 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.580202 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.580218 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:56 crc kubenswrapper[4846]: E1201 00:06:56.580496 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:56 crc kubenswrapper[4846]: E1201 00:06:56.580645 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:56 crc kubenswrapper[4846]: E1201 00:06:56.580877 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.640353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.640419 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.640446 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.640482 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.640504 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.743635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.743704 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.743716 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.743766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.743777 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.837875 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/0.log" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.842400 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73" exitCode=1 Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.842467 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.843855 4846 scope.go:117] "RemoveContainer" containerID="eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.846438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.846537 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.846562 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.846591 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.846617 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.864133 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.887081 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.905209 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.923013 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.936734 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.948925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.948969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.948983 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.949003 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.949016 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:56Z","lastTransitionTime":"2025-12-01T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.949391 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.960513 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.979623 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:56 crc kubenswrapper[4846]: I1201 00:06:56.997037 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:56Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.008953 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.024798 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.040127 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.053212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.053265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.053281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.053301 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.053317 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.055302 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.067783 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.156255 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.156283 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.156291 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.156304 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.156313 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.259768 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.259809 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.259820 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.259835 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.259844 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.362796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.362862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.362886 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.362918 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.362942 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.466985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.467049 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.467065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.467093 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.467113 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.570165 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.570228 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.570238 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.570258 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.570271 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.672850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.672911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.672925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.672947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.672960 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.775659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.775728 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.775742 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.775761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.775773 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.848376 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/0.log" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.851241 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.851646 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.868004 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.878707 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.878762 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.878775 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.878794 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.878806 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.883261 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.895268 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.909624 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.923482 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.937619 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.949860 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.967551 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.980549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.980582 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.980591 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.980604 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.980612 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:57Z","lastTransitionTime":"2025-12-01T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:57 crc kubenswrapper[4846]: I1201 00:06:57.990116 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:57Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.019096 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.032453 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.045226 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.056411 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.072930 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.082115 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.082141 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc 
kubenswrapper[4846]: I1201 00:06:58.082149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.082163 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.082171 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.184511 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.184558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.184570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.184587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.184599 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.290072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.290394 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.290724 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.290759 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.290776 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.325448 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42"] Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.326208 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.329966 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.330265 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.342392 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.358518 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.358796 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.371534 4846 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.387703 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.393876 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.393900 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc 
kubenswrapper[4846]: I1201 00:06:58.393908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.393922 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.393932 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.405540 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv76z\" (UniqueName: \"kubernetes.io/projected/a431347f-cbbf-4e17-b470-a08d42a11b86-kube-api-access-kv76z\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.405578 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.405605 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a431347f-cbbf-4e17-b470-a08d42a11b86-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.405661 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.406601 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.418712 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.427650 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.437917 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.448352 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.459551 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.471026 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.481178 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.490488 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.495774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.495808 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.495842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.495861 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.495872 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.506036 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv76z\" (UniqueName: \"kubernetes.io/projected/a431347f-cbbf-4e17-b470-a08d42a11b86-kube-api-access-kv76z\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.506069 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.506100 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.506118 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a431347f-cbbf-4e17-b470-a08d42a11b86-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.507352 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.507512 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a431347f-cbbf-4e17-b470-a08d42a11b86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.510296 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.511882 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a431347f-cbbf-4e17-b470-a08d42a11b86-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.522351 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.525381 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv76z\" (UniqueName: \"kubernetes.io/projected/a431347f-cbbf-4e17-b470-a08d42a11b86-kube-api-access-kv76z\") pod \"ovnkube-control-plane-749d76644c-f9x42\" (UID: \"a431347f-cbbf-4e17-b470-a08d42a11b86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.532915 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.546614 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.558857 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.570309 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.580152 4846 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.580187 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.580267 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:06:58 crc kubenswrapper[4846]: E1201 00:06:58.580269 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:06:58 crc kubenswrapper[4846]: E1201 00:06:58.580332 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:06:58 crc kubenswrapper[4846]: E1201 00:06:58.580388 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.594716 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.598442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.598465 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.598473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.598488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.598499 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.612208 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.625797 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.639389 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.639422 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: W1201 00:06:58.659383 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda431347f_cbbf_4e17_b470_a08d42a11b86.slice/crio-ef458641f6850097a7771a270d83d947aedd90773e90795af357ea777c1ef698 WatchSource:0}: Error finding container ef458641f6850097a7771a270d83d947aedd90773e90795af357ea777c1ef698: Status 404 returned error can't find the container with id ef458641f6850097a7771a270d83d947aedd90773e90795af357ea777c1ef698 Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.659615 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.673066 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.686280 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703119 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703710 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703739 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.703762 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.712762 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.730416 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.742968 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.806315 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.806361 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.806372 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.806390 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.806402 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.855435 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/1.log" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.855945 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/0.log" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.858393 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41" exitCode=1 Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.859556 4846 scope.go:117] "RemoveContainer" containerID="db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41" Dec 01 00:06:58 crc kubenswrapper[4846]: E1201 00:06:58.859799 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.859847 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.859892 4846 scope.go:117] "RemoveContainer" containerID="eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.866164 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" event={"ID":"a431347f-cbbf-4e17-b470-a08d42a11b86","Type":"ContainerStarted","Data":"30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.866208 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" event={"ID":"a431347f-cbbf-4e17-b470-a08d42a11b86","Type":"ContainerStarted","Data":"ef458641f6850097a7771a270d83d947aedd90773e90795af357ea777c1ef698"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.878980 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.893269 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.904326 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.911786 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.911831 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.911843 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.911862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.911874 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:58Z","lastTransitionTime":"2025-12-01T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.916879 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.926663 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.940923 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.954454 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.966216 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.979599 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:58 crc kubenswrapper[4846]: I1201 00:06:58.999421 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594d
acf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2
005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:58Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.012056 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.013664 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.013725 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.013738 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.013757 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.013774 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.031179 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.048200 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.069439 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.084419 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.116155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.116185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc 
kubenswrapper[4846]: I1201 00:06:59.116194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.116209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.116218 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.219266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.219306 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.219314 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.219330 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.219338 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.322482 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.322553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.322578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.322611 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.322634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.418467 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rl69z"] Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.418960 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: E1201 00:06:59.419019 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.426188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.426243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.426264 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.426295 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.426316 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.438500 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.459000 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.474018 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.490373 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.504803 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.517412 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.517574 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mngnh\" (UniqueName: \"kubernetes.io/projected/219022f7-8f31-4021-9df8-733c23b34602-kube-api-access-mngnh\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.528079 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.529553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.529587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.529601 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.529622 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.529636 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.545625 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.564578 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.582641 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.602890 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.619041 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.619117 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mngnh\" (UniqueName: \"kubernetes.io/projected/219022f7-8f31-4021-9df8-733c23b34602-kube-api-access-mngnh\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: E1201 00:06:59.619305 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:06:59 crc kubenswrapper[4846]: E1201 00:06:59.619419 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:00.119392483 +0000 UTC m=+40.900161567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.622356 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594d
acf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2
005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.631896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.631931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.631939 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.631953 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.631962 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.632828 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.638959 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mngnh\" (UniqueName: \"kubernetes.io/projected/219022f7-8f31-4021-9df8-733c23b34602-kube-api-access-mngnh\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.650398 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.661589 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.670965 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.684859 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.698666 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.710335 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.723454 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.734156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.734201 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.734231 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.734247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.734257 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.736002 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.748065 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.774482 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594d
acf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eeddbec81412cf8df530686e27330278ca78f68e6f527326a896dfec2414ea73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:55Z\\\",\\\"message\\\":\\\"201 00:06:55.124167 6153 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1201 00:06:55.124204 6153 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1201 00:06:55.124230 6153 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1201 00:06:55.124266 6153 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1201 00:06:55.124361 6153 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1201 00:06:55.124405 6153 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1201 00:06:55.124433 6153 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1201 00:06:55.124410 6153 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1201 00:06:55.124488 6153 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1201 00:06:55.124430 6153 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1201 00:06:55.124371 6153 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1201 00:06:55.124508 6153 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1201 00:06:55.124485 6153 factory.go:656] Stopping watch factory\\\\nI1201 00:06:55.124490 6153 handler.go:208] Removed *v1.Node event handler 2\\\\nI1201 00:06:55.124514 6153 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1201 00:06:55.124550 6153 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2
005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.788472 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.801176 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.817664 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.830388 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.836913 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.836965 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.836978 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.837022 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.837036 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.850015 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12
-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 
00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.868096 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.872091 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/1.log" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.875619 4846 scope.go:117] "RemoveContainer" containerID="db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41" Dec 01 00:06:59 crc kubenswrapper[4846]: E1201 00:06:59.875902 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" 
podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.877530 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" event={"ID":"a431347f-cbbf-4e17-b470-a08d42a11b86","Type":"ContainerStarted","Data":"2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.888237 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.901107 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.913628 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.923324 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.939353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.939421 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.939437 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.939458 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.939472 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:06:59Z","lastTransitionTime":"2025-12-01T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.940591 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.959729 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.973057 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:06:59 crc kubenswrapper[4846]: I1201 00:06:59.989519 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:06:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.010570 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.026206 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.036467 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.042106 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.042147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.042156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.042172 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.042184 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.049496 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.059119 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.075761 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.087312 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.097613 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.107600 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.124066 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.124202 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.124250 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:01.124236011 +0000 UTC m=+41.905005085 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.144087 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.144123 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.144135 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.144151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.144162 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.147097 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594d
acf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.180342 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.219302 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:00Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.246161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.246197 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.246209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.246227 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.246238 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.349829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.349883 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.349900 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.349921 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.349936 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.454110 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.454170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.454187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.454212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.454229 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.557079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.557160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.557183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.557215 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.557240 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.579881 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.579953 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.580004 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.579948 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.580073 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.580177 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.580710 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:00 crc kubenswrapper[4846]: E1201 00:07:00.580502 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.661023 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.661119 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.661132 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.661151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.661163 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.771774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.771815 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.771829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.771849 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.771864 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.875857 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.875927 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.875943 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.875967 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.875982 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.988780 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.988822 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.988830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.988845 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:00 crc kubenswrapper[4846]: I1201 00:07:00.988854 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:00Z","lastTransitionTime":"2025-12-01T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.092358 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.092445 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.092459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.092476 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.092488 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.136390 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.136581 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.136678 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:03.136648221 +0000 UTC m=+43.917417335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.196074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.196132 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.196148 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.196171 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.196188 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.275654 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.275790 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.275818 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.275860 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.275896 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.298095 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:01Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.304079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.304156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.304181 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.304212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.304233 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.329164 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:01Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.335674 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.335781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.335799 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.335826 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.335848 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.356776 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:01Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.362091 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.362143 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.362160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.362185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.362201 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.382804 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:01Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.388754 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.388830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.388842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.388860 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.388873 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.407194 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:01Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:01 crc kubenswrapper[4846]: E1201 00:07:01.407361 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.410160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.410206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.410222 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.410246 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.410264 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.513206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.513280 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.513298 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.513326 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.513343 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.617333 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.617408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.617442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.617483 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.617507 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.720998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.721062 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.721083 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.721113 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.721138 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.826662 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.826752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.826771 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.826797 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.826815 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.930543 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.930616 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.930645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.930715 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:01 crc kubenswrapper[4846]: I1201 00:07:01.930747 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:01Z","lastTransitionTime":"2025-12-01T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.033980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.034022 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.034034 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.034051 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.034063 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.136758 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.136825 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.136837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.136862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.136877 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.241134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.241248 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.241278 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.241310 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.241337 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.346436 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.346484 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.346501 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.346524 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.346541 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.450138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.450187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.450195 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.450210 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.450219 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.552896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.552946 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.552961 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.552981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.552997 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.579475 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:02 crc kubenswrapper[4846]: E1201 00:07:02.579733 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.579864 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.579904 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.579864 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:02 crc kubenswrapper[4846]: E1201 00:07:02.580074 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:02 crc kubenswrapper[4846]: E1201 00:07:02.580262 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:02 crc kubenswrapper[4846]: E1201 00:07:02.580469 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.656301 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.656378 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.656398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.656426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.656445 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.759810 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.759890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.759913 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.759942 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.759965 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.863433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.863496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.863514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.863644 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.863669 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.966138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.966217 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.966239 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.966265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:02 crc kubenswrapper[4846]: I1201 00:07:02.966284 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:02Z","lastTransitionTime":"2025-12-01T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.070575 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.070633 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.070652 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.070675 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.070712 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.161091 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:03 crc kubenswrapper[4846]: E1201 00:07:03.161334 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:03 crc kubenswrapper[4846]: E1201 00:07:03.161492 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:07.161456501 +0000 UTC m=+47.942225755 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.174796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.174849 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.174857 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.174874 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.174884 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.278181 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.278223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.278232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.278252 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.278261 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.381401 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.381481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.381495 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.381541 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.381560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.484712 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.484756 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.484768 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.484787 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.484801 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.587164 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.587563 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.587584 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.587607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.587624 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.690124 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.690177 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.690190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.690208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.690219 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.793727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.793783 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.793800 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.793821 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.793834 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.895867 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.895924 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.895937 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.895957 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.895971 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.999291 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.999332 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.999343 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.999360 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:03 crc kubenswrapper[4846]: I1201 00:07:03.999372 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:03Z","lastTransitionTime":"2025-12-01T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.103803 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.103905 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.103930 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.103962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.103982 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.208716 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.208756 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.208768 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.208788 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.208801 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.312105 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.312162 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.312179 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.312209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.312228 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.415423 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.415512 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.415538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.415569 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.415592 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.518432 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.518500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.518516 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.518543 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.518560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.580101 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.580156 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.580198 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.580198 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:04 crc kubenswrapper[4846]: E1201 00:07:04.580277 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:04 crc kubenswrapper[4846]: E1201 00:07:04.580397 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:04 crc kubenswrapper[4846]: E1201 00:07:04.580575 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:04 crc kubenswrapper[4846]: E1201 00:07:04.580956 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.622051 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.622143 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.622161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.622190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.622213 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.725183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.725293 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.725320 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.725361 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.725380 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.832908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.832985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.833008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.833037 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.833057 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.937236 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.937306 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.937327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.937365 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:04 crc kubenswrapper[4846]: I1201 00:07:04.937386 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:04Z","lastTransitionTime":"2025-12-01T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.040316 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.040377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.040392 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.040418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.040434 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.143840 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.143914 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.143931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.143958 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.143979 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.247648 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.247730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.247747 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.247773 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.247789 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.351378 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.351450 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.351473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.351505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.351529 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.454672 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.454784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.454806 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.454837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.454857 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.557806 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.557938 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.557956 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.557982 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.558001 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.661736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.661800 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.661818 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.661842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.661860 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.764994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.765072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.765095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.765127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.765149 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.868806 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.868886 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.868911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.868942 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.868966 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.972177 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.972242 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.972258 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.972284 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:05 crc kubenswrapper[4846]: I1201 00:07:05.972301 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:05Z","lastTransitionTime":"2025-12-01T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.076858 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.076924 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.076942 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.076969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.076986 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.180426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.180489 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.180506 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.180531 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.180550 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.283484 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.283553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.283579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.283611 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.283637 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.386466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.386528 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.386545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.386567 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.386583 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.489708 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.489774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.489790 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.489813 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.489863 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.580126 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.580167 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.580173 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.580309 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:06 crc kubenswrapper[4846]: E1201 00:07:06.580524 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:06 crc kubenswrapper[4846]: E1201 00:07:06.580715 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:06 crc kubenswrapper[4846]: E1201 00:07:06.580869 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:06 crc kubenswrapper[4846]: E1201 00:07:06.580993 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.593723 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.593818 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.593843 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.593877 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.593903 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.697304 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.697389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.697410 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.697435 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.697454 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.799945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.800015 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.800033 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.800061 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.800080 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.904085 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.904187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.904218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.904252 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:06 crc kubenswrapper[4846]: I1201 00:07:06.904275 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:06Z","lastTransitionTime":"2025-12-01T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.008507 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.008591 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.008612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.008642 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.008663 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.112628 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.112731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.112755 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.112785 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.112806 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.212515 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:07 crc kubenswrapper[4846]: E1201 00:07:07.212708 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:07 crc kubenswrapper[4846]: E1201 00:07:07.212810 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:15.212785915 +0000 UTC m=+55.993555029 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.215493 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.215544 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.215566 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.215596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.215618 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.323664 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.323742 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.323760 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.323784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.323801 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.427290 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.427354 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.427373 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.427413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.427441 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.530891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.530948 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.530970 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.531000 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.531021 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.634794 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.634862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.634879 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.634904 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.634921 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.738523 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.738591 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.738609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.738636 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.738651 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.842520 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.842591 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.842610 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.842635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.842654 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.945013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.945078 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.945096 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.945122 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:07 crc kubenswrapper[4846]: I1201 00:07:07.945139 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:07Z","lastTransitionTime":"2025-12-01T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.048113 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.048186 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.048209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.048238 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.048263 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.152001 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.152054 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.152065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.152083 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.152092 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.256447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.256512 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.256527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.256553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.256572 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.360430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.360480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.360495 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.360519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.360533 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.464382 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.464442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.464462 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.464489 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.464507 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.567580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.567667 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.567722 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.567754 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.567777 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.580032 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.580064 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.580041 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.580124 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:08 crc kubenswrapper[4846]: E1201 00:07:08.580250 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:08 crc kubenswrapper[4846]: E1201 00:07:08.580347 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:08 crc kubenswrapper[4846]: E1201 00:07:08.580495 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:08 crc kubenswrapper[4846]: E1201 00:07:08.580588 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.671041 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.671114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.671138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.671169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.671190 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.774984 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.775137 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.775161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.775240 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.775272 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.878634 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.878751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.878780 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.878879 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.878942 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.983064 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.983134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.983157 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.983189 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:08 crc kubenswrapper[4846]: I1201 00:07:08.983211 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:08Z","lastTransitionTime":"2025-12-01T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.086712 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.086771 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.086798 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.086830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.086854 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.190548 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.190631 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.190656 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.190731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.190759 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.294510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.294596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.294629 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.294666 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.294727 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.397648 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.397778 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.397805 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.397838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.397860 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.501065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.501127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.501138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.501159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.501171 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.603554 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.603612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.603622 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.603637 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.603690 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.606861 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.610961 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc 
kubenswrapper[4846]: I1201 00:07:09.623403 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.633444 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.651647 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.679037 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.696315 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.706921 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.706995 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.707015 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.707047 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.707075 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.711657 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.732364 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.752552 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.770734 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.789248 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.806135 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.810410 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.810488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.810510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.810538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.810556 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.824158 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.837723 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.852914 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.865721 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.881939 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.898896 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.913085 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.913172 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.913188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.913388 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.913431 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:09Z","lastTransitionTime":"2025-12-01T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.917140 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.936767 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.962702 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:09 crc kubenswrapper[4846]: I1201 00:07:09.985729 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:09Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.004806 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016167 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016235 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016263 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016274 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.016779 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.030520 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.042449 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.057076 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.071987 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.088966 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.106669 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.118362 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.118405 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.118419 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.118438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.118450 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.121462 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.148086 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.171466 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.188653 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:10Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.222275 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.222355 4846 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.222378 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.222736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.222799 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.325406 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.325461 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.325477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.325499 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.325516 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.351294 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.351555 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:07:42.351505564 +0000 UTC m=+83.132274708 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.351641 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.351749 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.351893 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.351972 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.351997 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:42.351973428 +0000 UTC m=+83.132742532 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.352074 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:42.352018329 +0000 UTC m=+83.132787443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.429185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.429241 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.429258 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.429282 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.429298 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.452957 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.453062 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453153 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453216 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453222 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453239 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453242 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453256 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453301 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:42.45328712 +0000 UTC m=+83.234056194 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.453314 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:42.453309311 +0000 UTC m=+83.234078385 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.533290 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.533362 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.533380 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.533453 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.533514 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.580321 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.580335 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.581235 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.580485 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.580415 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.581476 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.581993 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:10 crc kubenswrapper[4846]: E1201 00:07:10.582090 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.637392 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.637459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.637477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.637503 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.637523 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.740451 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.740505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.740516 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.740540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.740559 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.844211 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.844267 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.844280 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.844303 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.844320 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.948005 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.948412 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.948895 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.949213 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:10 crc kubenswrapper[4846]: I1201 00:07:10.949524 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:10Z","lastTransitionTime":"2025-12-01T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.053263 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.053330 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.053344 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.053368 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.053387 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.156718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.156961 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.156970 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.156991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.157008 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.260247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.260375 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.260389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.260411 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.260425 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.363257 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.363313 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.363328 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.363352 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.363365 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.466412 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.466471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.466488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.466514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.466531 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.526607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.526743 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.526777 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.526809 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.526827 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.546122 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.552651 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.552761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.552784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.552812 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.552832 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.573157 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.578930 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.578989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.579010 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.579038 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.579058 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.604344 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.610577 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.610836 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.610911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.610984 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.611041 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.634891 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.644108 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.644438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.644526 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.644613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.644771 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.661348 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:11Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:11 crc kubenswrapper[4846]: E1201 00:07:11.661870 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.664234 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.664376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.664528 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.664893 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.665130 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.769241 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.769355 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.769384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.769418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.769458 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.873731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.873817 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.873835 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.873862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.873879 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.978104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.978505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.978710 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.978928 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:11 crc kubenswrapper[4846]: I1201 00:07:11.979098 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:11Z","lastTransitionTime":"2025-12-01T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.083105 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.083502 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.083568 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.083665 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.083760 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.187469 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.187998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.188063 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.188134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.188192 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.291569 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.292092 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.292241 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.292341 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.292437 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.395939 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.395975 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.395985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.396002 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.396012 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.498906 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.498973 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.498997 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.499064 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.499091 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.580319 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.580383 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.580433 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.580351 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:12 crc kubenswrapper[4846]: E1201 00:07:12.580522 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:12 crc kubenswrapper[4846]: E1201 00:07:12.580967 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:12 crc kubenswrapper[4846]: E1201 00:07:12.581168 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:12 crc kubenswrapper[4846]: E1201 00:07:12.580820 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.602592 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.602635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.602647 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.602667 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.602696 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.706531 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.706605 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.706631 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.706663 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.706720 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.809985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.810056 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.810074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.810100 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.810117 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.914099 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.914183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.914214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.914248 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:12 crc kubenswrapper[4846]: I1201 00:07:12.914266 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:12Z","lastTransitionTime":"2025-12-01T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.017437 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.017500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.017522 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.017552 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.017573 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.121980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.122326 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.122518 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.122713 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.122894 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.227031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.227383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.227507 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.227638 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.227796 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.332359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.332869 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.333055 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.333193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.333363 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.443297 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.443945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.444053 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.444159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.444240 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.547991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.549186 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.549279 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.549359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.549464 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.652852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.652915 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.652928 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.652960 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.652975 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.756546 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.756624 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.756636 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.756660 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.756676 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.859654 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.859748 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.859770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.859799 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.859819 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.963300 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.963364 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.963383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.963407 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:13 crc kubenswrapper[4846]: I1201 00:07:13.963424 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:13Z","lastTransitionTime":"2025-12-01T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.066793 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.066850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.066859 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.066887 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.066900 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.170561 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.170632 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.170657 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.170725 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.170751 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.274095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.274161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.274182 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.274237 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.274276 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.377172 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.377247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.377271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.377302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.377322 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.485421 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.485513 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.485526 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.485551 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.485571 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.579859 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.579917 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.579858 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.579913 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.580983 4846 scope.go:117] "RemoveContainer" containerID="db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41" Dec 01 00:07:14 crc kubenswrapper[4846]: E1201 00:07:14.581178 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:14 crc kubenswrapper[4846]: E1201 00:07:14.581547 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:14 crc kubenswrapper[4846]: E1201 00:07:14.581760 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:14 crc kubenswrapper[4846]: E1201 00:07:14.582013 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.590567 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.590631 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.590656 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.590718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.590746 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.693761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.693829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.693847 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.693876 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.693898 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.797815 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.797855 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.797872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.797896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.797913 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.901411 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.901466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.901477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.901500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.901514 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:14Z","lastTransitionTime":"2025-12-01T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.941226 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/1.log" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.947934 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971"} Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.950181 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.973397 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e
11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] 
waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:14Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:14 crc kubenswrapper[4846]: I1201 00:07:14.995784 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:14Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.003960 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.004013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.004026 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.004046 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.004056 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.008408 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.023075 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.045241 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.064741 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.079071 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.094236 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.105111 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.112612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.112660 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.112673 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.112713 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.112726 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.124795 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.140211 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.161653 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c
914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.173679 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.191064 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.209139 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.215589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.215634 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.215659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.215716 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.215732 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.223294 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.241345 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.313283 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " 
pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:15 crc kubenswrapper[4846]: E1201 00:07:15.313631 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:15 crc kubenswrapper[4846]: E1201 00:07:15.313837 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:07:31.313793102 +0000 UTC m=+72.094578576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.318552 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.318606 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.318616 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.318637 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.318649 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.422189 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.422268 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.422293 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.422325 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.422347 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.526193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.526269 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.526281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.526323 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.526339 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.629538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.630076 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.630088 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.630108 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.630120 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.733239 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.733302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.733313 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.733335 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.733347 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.836386 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.836467 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.836486 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.836519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.836537 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.940273 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.940352 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.940365 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.940389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.940678 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:15Z","lastTransitionTime":"2025-12-01T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.955305 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/2.log" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.956815 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/1.log" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.962285 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" exitCode=1 Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.962387 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971"} Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.962492 4846 scope.go:117] "RemoveContainer" containerID="db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.963341 4846 scope.go:117] "RemoveContainer" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" Dec 01 00:07:15 crc kubenswrapper[4846]: E1201 00:07:15.965551 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:15 crc kubenswrapper[4846]: I1201 00:07:15.988352 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.006273 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.025979 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.044023 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.044091 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.044119 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.044152 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.044176 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.046060 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.075056 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.090047 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.104300 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.117342 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.129971 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.145890 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.148122 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.148176 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.148188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.148209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.148222 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.159139 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.169774 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.187282 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.202729 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.214285 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.234953 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db63dd028e2c2a23741da652522ee05a7935594dacf617277735463dfb675a41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"message\\\":\\\"achine-api-operator-tls service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00702baab \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:8443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: machine-api-operator,},ClusterIP:10.217.5.21,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.21],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1201 00:06:58.390674 6271 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987
fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.249453 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:16Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.250603 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.250657 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.250668 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.250706 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.250718 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.353035 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.353312 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.353445 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.353542 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.353648 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.457020 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.457531 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.457827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.458047 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.458126 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.561212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.561520 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.561651 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.561843 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.561997 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.579733 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.579760 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:16 crc kubenswrapper[4846]: E1201 00:07:16.580048 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:16 crc kubenswrapper[4846]: E1201 00:07:16.580229 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.579772 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:16 crc kubenswrapper[4846]: E1201 00:07:16.580417 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.580806 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:16 crc kubenswrapper[4846]: E1201 00:07:16.581070 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.665433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.665484 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.665496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.665516 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.665529 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.771438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.772499 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.772780 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.773106 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.773343 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.877752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.878090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.878200 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.878282 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.878367 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.972151 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/2.log" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.976127 4846 scope.go:117] "RemoveContainer" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" Dec 01 00:07:16 crc kubenswrapper[4846]: E1201 00:07:16.976430 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.980192 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.980321 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.980438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.980520 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:16 crc kubenswrapper[4846]: I1201 00:07:16.980634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:16Z","lastTransitionTime":"2025-12-01T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.012369 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.030758 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.043599 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.056945 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.068728 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.080555 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.083575 4846 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.083813 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.083878 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.083981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.084043 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.096619 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.115330 4846 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.133402 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.150801 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.172255 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.187018 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.187820 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.187891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.187911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc 
kubenswrapper[4846]: I1201 00:07:17.187939 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.187961 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.204424 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.221528 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.233872 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.247428 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.262700 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:17Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.290731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.290798 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.290818 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.290841 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 
00:07:17.290856 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.393763 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.393840 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.393862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.393897 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.393938 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.497596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.497652 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.497665 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.497704 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.497718 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.600055 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.600116 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.600135 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.600161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.600179 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.703149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.703207 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.703229 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.703261 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.703285 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.806761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.806834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.806852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.806877 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.806894 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.909139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.909188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.909212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.909239 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:17 crc kubenswrapper[4846]: I1201 00:07:17.909255 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:17Z","lastTransitionTime":"2025-12-01T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.012175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.012221 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.012231 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.012251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.012261 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.117094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.117440 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.117610 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.117730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.117813 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.221277 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.221622 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.221729 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.221857 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.221949 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.325863 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.325920 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.325934 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.325956 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.325971 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.429201 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.429250 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.429268 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.429287 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.429301 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.533170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.533232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.533251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.533275 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.533321 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.580196 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.580266 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.580267 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.580886 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:18 crc kubenswrapper[4846]: E1201 00:07:18.581117 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:18 crc kubenswrapper[4846]: E1201 00:07:18.581214 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:18 crc kubenswrapper[4846]: E1201 00:07:18.581282 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:18 crc kubenswrapper[4846]: E1201 00:07:18.581395 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.636659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.636814 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.636834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.636852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.636864 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.740191 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.740244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.740258 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.740282 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.740296 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.843194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.843267 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.843281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.843311 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.843329 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.947081 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.947146 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.947161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.947191 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:18 crc kubenswrapper[4846]: I1201 00:07:18.947221 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:18Z","lastTransitionTime":"2025-12-01T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.050759 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.050842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.050862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.050889 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.050909 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.154369 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.154430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.154447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.154471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.154489 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.257402 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.257472 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.257496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.257528 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.257555 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.360585 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.360652 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.360675 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.360736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.360760 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.464268 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.464371 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.464384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.464408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.464423 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.567351 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.567462 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.567485 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.567514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.567579 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.597285 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.621391 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.634108 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.649392 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693416 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693532 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693568 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693783 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.693945 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.715266 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.731206 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.744951 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd
445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.764304 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c
914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.775782 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.786626 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.796220 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.796244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.796253 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.796271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.796282 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.798773 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.808912 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.819993 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.831228 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.847031 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.865126 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:19Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.899634 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.899676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.899698 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.899717 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:19 crc kubenswrapper[4846]: I1201 00:07:19.899727 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:19Z","lastTransitionTime":"2025-12-01T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.002459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.002540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.002552 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.002575 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.002588 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.106154 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.106228 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.106244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.106270 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.106286 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.209786 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.209837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.209849 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.209866 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.209878 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.313913 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.313994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.314006 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.314030 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.314044 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.416898 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.417240 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.417249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.417265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.417274 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.520472 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.520539 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.520558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.520587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.520610 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.579937 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.579937 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:20 crc kubenswrapper[4846]: E1201 00:07:20.580145 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.579971 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.579954 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:20 crc kubenswrapper[4846]: E1201 00:07:20.580278 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:20 crc kubenswrapper[4846]: E1201 00:07:20.580491 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:20 crc kubenswrapper[4846]: E1201 00:07:20.580629 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.624491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.624575 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.624601 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.624634 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.624655 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.727945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.728007 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.728023 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.728048 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.728066 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.831810 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.831863 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.831881 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.831907 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.831927 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.935152 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.935199 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.935208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.935233 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:20 crc kubenswrapper[4846]: I1201 00:07:20.935244 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:20Z","lastTransitionTime":"2025-12-01T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.037929 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.037996 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.038014 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.038038 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.038060 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.148294 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.148396 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.148413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.148439 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.148453 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.251505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.251545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.251554 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.251571 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.251580 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.355556 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.355611 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.355624 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.355642 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.355651 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.458977 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.459045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.459063 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.459094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.459113 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.562302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.562376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.562398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.562427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.562446 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.666814 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.666900 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.666923 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.666956 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.666979 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.770526 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.770597 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.770616 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.770646 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.770665 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.849312 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.849360 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.849370 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.849387 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.849399 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.864818 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:21Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.870126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.870191 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.870218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.870253 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.870279 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.886773 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:21Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.892194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.892238 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.892252 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.892271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.892282 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.908827 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:21Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.913014 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.913094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.913114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.913160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.913179 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.932057 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:21Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.937168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.937207 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.937219 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.937240 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.937253 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.955229 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:21Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:21 crc kubenswrapper[4846]: E1201 00:07:21.955358 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.957126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.957168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.957179 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.957199 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:21 crc kubenswrapper[4846]: I1201 00:07:21.957210 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:21Z","lastTransitionTime":"2025-12-01T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.061035 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.061100 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.061118 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.061144 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.061160 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.164904 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.164963 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.164979 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.165002 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.165018 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.269094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.269187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.269205 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.269263 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.269286 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.372774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.372911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.372938 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.372974 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.373001 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.476650 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.476766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.476797 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.476827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.476845 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.579506 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.579555 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.579599 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.579510 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:22 crc kubenswrapper[4846]: E1201 00:07:22.579742 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:22 crc kubenswrapper[4846]: E1201 00:07:22.579862 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:22 crc kubenswrapper[4846]: E1201 00:07:22.580019 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:22 crc kubenswrapper[4846]: E1201 00:07:22.580160 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.580399 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.580461 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.580480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.580507 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.580522 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.685052 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.685105 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.685114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.685134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.685145 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.788005 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.788071 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.788087 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.788115 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.788136 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.892024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.892074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.892090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.892115 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.892133 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.995527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.995630 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.995653 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.995708 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:22 crc kubenswrapper[4846]: I1201 00:07:22.995733 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:22Z","lastTransitionTime":"2025-12-01T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.098839 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.098885 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.098896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.098915 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.098927 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.202613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.202726 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.202755 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.202789 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.202815 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.306362 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.306730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.306807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.306890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.306965 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.410377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.410923 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.411079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.411259 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.411401 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.514213 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.514262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.514273 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.514290 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.514301 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.616214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.616338 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.616363 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.616395 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.616416 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.719761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.719821 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.719836 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.719862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.719882 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.822414 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.822453 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.822461 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.822477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.822485 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.924329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.924376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.924386 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.924408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:23 crc kubenswrapper[4846]: I1201 00:07:23.924421 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:23Z","lastTransitionTime":"2025-12-01T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.027323 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.027371 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.027383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.027403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.027415 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.129563 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.129623 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.129632 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.129650 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.129661 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.233082 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.233134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.233149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.233171 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.233192 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.335947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.336004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.336016 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.336036 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.336048 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.439042 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.439127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.439142 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.439169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.439183 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.541935 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.541991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.542000 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.542017 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.542028 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.579892 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.579940 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.579988 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.579997 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:24 crc kubenswrapper[4846]: E1201 00:07:24.580026 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:24 crc kubenswrapper[4846]: E1201 00:07:24.580157 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:24 crc kubenswrapper[4846]: E1201 00:07:24.580294 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:24 crc kubenswrapper[4846]: E1201 00:07:24.580417 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.644029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.644065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.644073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.644088 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.644101 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.746090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.746132 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.746141 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.746157 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.746168 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.848766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.848802 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.848811 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.848832 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.848842 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.951311 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.951359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.951372 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.951390 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:24 crc kubenswrapper[4846]: I1201 00:07:24.951403 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:24Z","lastTransitionTime":"2025-12-01T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.054168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.054255 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.054281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.054318 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.054347 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.161265 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.161307 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.161317 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.161336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.161346 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.264347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.264413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.264427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.264446 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.264461 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.367271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.367323 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.367336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.367356 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.367368 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.471087 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.471169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.471183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.471203 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.471215 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.574796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.574854 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.574872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.574894 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.574911 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.677488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.677549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.677563 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.677581 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.677595 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.780329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.780377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.780388 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.780407 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.780421 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.884455 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.884560 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.884598 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.884639 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.884665 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.989067 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.989124 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.989135 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.989154 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:25 crc kubenswrapper[4846]: I1201 00:07:25.989165 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:25Z","lastTransitionTime":"2025-12-01T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.092522 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.092565 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.092578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.092614 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.092673 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.195044 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.195086 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.195096 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.195111 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.195120 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.298592 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.298645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.298658 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.298711 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.298729 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.401655 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.401715 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.401727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.401743 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.401754 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.505209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.505276 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.505289 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.505306 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.505316 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.579839 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.579895 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.579954 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:26 crc kubenswrapper[4846]: E1201 00:07:26.579973 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.579848 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:26 crc kubenswrapper[4846]: E1201 00:07:26.580059 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:26 crc kubenswrapper[4846]: E1201 00:07:26.580180 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:26 crc kubenswrapper[4846]: E1201 00:07:26.580279 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.607510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.607561 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.607583 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.607613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.607636 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.710962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.711247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.711334 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.711419 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.711495 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.815018 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.815059 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.815069 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.815086 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.815097 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.918365 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.918404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.918417 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.918433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:26 crc kubenswrapper[4846]: I1201 00:07:26.918442 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:26Z","lastTransitionTime":"2025-12-01T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.021068 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.021946 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.022039 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.022149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.022249 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.124940 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.125007 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.125024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.125050 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.125067 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.227668 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.227733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.227744 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.227760 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.227770 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.330549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.330596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.330609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.330628 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.330639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.433991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.434043 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.434057 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.434076 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.434091 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.537131 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.537194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.537212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.537243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.537271 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.640725 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.641083 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.641155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.641243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.641325 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.748900 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.748977 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.748993 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.749018 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.749037 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.852671 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.852756 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.852773 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.852799 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.852816 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.956652 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.957214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.957351 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.957499 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:27 crc kubenswrapper[4846]: I1201 00:07:27.957648 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:27Z","lastTransitionTime":"2025-12-01T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.060071 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.060751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.060813 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.060833 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.060845 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.163181 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.163243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.163263 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.163294 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.163313 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.265226 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.265285 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.265296 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.265315 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.265330 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.367672 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.367740 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.367750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.367769 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.367782 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.470908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.470955 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.470968 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.470988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.471001 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.574489 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.574549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.574570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.574615 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.574643 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.580368 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:28 crc kubenswrapper[4846]: E1201 00:07:28.580464 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.580559 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.580761 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.580819 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:28 crc kubenswrapper[4846]: E1201 00:07:28.580967 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:28 crc kubenswrapper[4846]: E1201 00:07:28.581077 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.581142 4846 scope.go:117] "RemoveContainer" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" Dec 01 00:07:28 crc kubenswrapper[4846]: E1201 00:07:28.581203 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:28 crc kubenswrapper[4846]: E1201 00:07:28.581379 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.677031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.677074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.677086 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.677109 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.677125 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.779463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.779525 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.779545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.779588 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.779626 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.883259 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.883317 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.883338 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.883408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.883429 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.985609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.985657 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.985704 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.985736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:28 crc kubenswrapper[4846]: I1201 00:07:28.985752 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:28Z","lastTransitionTime":"2025-12-01T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.088952 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.088994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.089005 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.089024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.089038 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.191442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.191486 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.191496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.191514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.191526 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.294271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.294329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.294342 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.294365 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.294375 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.397522 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.397579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.397588 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.397607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.397620 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.500539 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.500585 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.500596 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.500615 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.500629 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.595933 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.604378 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.604417 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.604427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.604448 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.604462 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.610458 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.634061 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.646335 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.659032 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.683242 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706122 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706450 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706458 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706478 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.706490 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.724603 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.739176 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.754000 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.767907 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.779623 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.793193 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.804640 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.808989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.809104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.809185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc 
kubenswrapper[4846]: I1201 00:07:29.809258 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.809326 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.822334 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.839560 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.856752 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:29Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.911486 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.911547 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.911558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 00:07:29.911580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:29 crc kubenswrapper[4846]: I1201 
00:07:29.911596 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:29Z","lastTransitionTime":"2025-12-01T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.015191 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.015244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.015256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.015276 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.015289 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.117347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.117399 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.117409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.117428 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.117445 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.220230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.220302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.220316 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.220344 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.220362 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.323359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.323402 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.323438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.323491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.323501 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.426480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.426534 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.426543 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.426565 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.426577 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.529426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.529470 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.529480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.529499 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.529510 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.580446 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.580511 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.580472 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.580630 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:30 crc kubenswrapper[4846]: E1201 00:07:30.580628 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:30 crc kubenswrapper[4846]: E1201 00:07:30.580771 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:30 crc kubenswrapper[4846]: E1201 00:07:30.580866 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:30 crc kubenswrapper[4846]: E1201 00:07:30.580920 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.632860 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.632919 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.632930 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.632952 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.633001 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.736380 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.736727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.736807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.736898 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.736979 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.839559 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.840013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.840109 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.840185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.840274 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.943709 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.944032 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.944101 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.944180 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:30 crc kubenswrapper[4846]: I1201 00:07:30.944253 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:30Z","lastTransitionTime":"2025-12-01T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.047503 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.047557 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.047577 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.047604 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.047624 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.150926 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.151023 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.151049 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.151081 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.151100 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.254484 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.254518 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.254527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.254540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.254549 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.322967 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:31 crc kubenswrapper[4846]: E1201 00:07:31.323250 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:31 crc kubenswrapper[4846]: E1201 00:07:31.323370 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:03.323346272 +0000 UTC m=+104.104115356 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.358772 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.358837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.358846 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.358868 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.358881 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.462661 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.462788 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.462807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.462835 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.462854 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.566269 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.566337 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.566360 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.566393 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.566416 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.669715 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.669790 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.669808 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.669834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.669852 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.773609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.773665 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.773698 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.773723 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.773738 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.876825 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.876868 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.876880 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.876899 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.876909 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.979948 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.979998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.980007 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.980024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:31 crc kubenswrapper[4846]: I1201 00:07:31.980037 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:31Z","lastTransitionTime":"2025-12-01T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.029398 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/0.log" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.029468 4846 generic.go:334] "Generic (PLEG): container finished" podID="607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c" containerID="e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6" exitCode=1 Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.029516 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerDied","Data":"e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.030121 4846 scope.go:117] "RemoveContainer" containerID="e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.057157 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.072980 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.083249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.083289 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.083301 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.083321 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.083338 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.089757 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.102754 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.124742 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.139627 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.150397 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.163410 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.179678 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.185549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.185589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.185842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.186012 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.186034 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.188736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.188910 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.189866 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.189987 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.190069 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.199808 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" 
Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.202905 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.206804 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.206852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.206871 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.206889 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.206899 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.212807 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.220666 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.231225 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.231587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.231824 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.232178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.232437 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.236857 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.248036 4846 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.253262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.253578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.253676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.253807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.253885 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.254901 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.267443 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2
d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.268358 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.272618 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.272967 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.273153 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.273324 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.273735 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.282470 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.288796 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2
d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.288927 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.291797 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.291827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.291837 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.291859 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.291872 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.293763 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.312181 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:32Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.394393 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.394433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.394441 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.394459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.394470 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.497015 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.497078 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.497090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.497110 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.497121 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.579482 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.579706 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.579982 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.580042 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.580195 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.580305 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.580466 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:32 crc kubenswrapper[4846]: E1201 00:07:32.580615 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.599984 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.600065 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.600079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.600106 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.600120 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.703101 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.703189 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.703201 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.703219 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.703230 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.806422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.806457 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.806466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.806485 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.806497 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.909554 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.909594 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.909606 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.909623 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:32 crc kubenswrapper[4846]: I1201 00:07:32.909634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:32Z","lastTransitionTime":"2025-12-01T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.012571 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.012603 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.012613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.012629 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.012639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.034442 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/0.log" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.034510 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerStarted","Data":"f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.049564 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.060753 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.084298 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.114808 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.114981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.115218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.115230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.115248 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.115258 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.141804 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.165860 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.178994 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.189662 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.201144 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.213143 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.217530 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.217584 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.217597 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.217618 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.217634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.222999 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.237466 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.252140 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.262597 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.271410 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.281841 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.290708 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:33Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.320297 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.320343 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.320361 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.320382 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.320398 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.425046 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.425112 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.425126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.425149 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.425167 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.527804 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.527846 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.527858 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.527880 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.527894 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.631262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.631300 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.631310 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.631327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.631338 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.734165 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.734223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.734240 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.734267 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.734284 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.836863 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.836978 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.836997 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.837022 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.837038 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.940104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.940144 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.940155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.940176 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:33 crc kubenswrapper[4846]: I1201 00:07:33.940189 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:33Z","lastTransitionTime":"2025-12-01T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.043284 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.043372 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.043392 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.043420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.043439 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.146475 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.146546 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.146558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.146578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.146591 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.250121 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.250190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.250223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.250255 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.250277 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.353851 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.353922 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.353943 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.353973 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.353991 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.457024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.457110 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.457129 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.457161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.457181 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.560413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.560498 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.560523 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.560558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.560582 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.579910 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.579921 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.580007 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.580080 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:34 crc kubenswrapper[4846]: E1201 00:07:34.580333 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:34 crc kubenswrapper[4846]: E1201 00:07:34.580424 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:34 crc kubenswrapper[4846]: E1201 00:07:34.580560 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:34 crc kubenswrapper[4846]: E1201 00:07:34.580761 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.662951 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.662991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.663001 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.663019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.663032 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.766525 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.766604 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.766625 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.766662 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.766716 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.869460 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.869534 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.869558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.869597 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.869625 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.973433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.973480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.973490 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.973507 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:34 crc kubenswrapper[4846]: I1201 00:07:34.973519 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:34Z","lastTransitionTime":"2025-12-01T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.078008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.078126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.078147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.078232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.078285 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.182159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.182233 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.182256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.182288 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.182311 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.285570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.285614 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.285628 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.285648 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.285660 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.388873 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.388938 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.388962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.388996 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.389020 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.492369 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.492452 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.492471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.492504 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.492525 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.595720 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.595792 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.595820 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.595856 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.595881 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.702330 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.702405 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.702427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.702456 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.702480 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.806139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.806192 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.806203 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.806218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.806228 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.909286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.909351 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.909369 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.909396 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:35 crc kubenswrapper[4846]: I1201 00:07:35.909414 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:35Z","lastTransitionTime":"2025-12-01T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.012979 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.013068 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.013114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.013145 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.013163 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.116045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.116088 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.116097 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.116112 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.116124 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.219247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.219291 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.219308 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.219336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.219356 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.322772 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.322851 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.322886 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.322918 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.322941 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.426451 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.426514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.426532 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.426555 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.426572 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.529390 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.529469 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.529493 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.529524 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.529545 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.580310 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.580406 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.580343 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.580343 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:36 crc kubenswrapper[4846]: E1201 00:07:36.580555 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:36 crc kubenswrapper[4846]: E1201 00:07:36.580645 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:36 crc kubenswrapper[4846]: E1201 00:07:36.580815 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:36 crc kubenswrapper[4846]: E1201 00:07:36.580961 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.632903 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.632966 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.632981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.633009 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.633024 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.736610 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.736722 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.736748 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.736778 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.736796 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.839325 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.839402 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.839426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.839459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.839484 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.943222 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.943280 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.943294 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.943318 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:36 crc kubenswrapper[4846]: I1201 00:07:36.943334 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:36Z","lastTransitionTime":"2025-12-01T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.046711 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.046779 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.046803 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.046832 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.046855 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.149969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.150021 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.150033 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.150050 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.150061 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.253273 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.253373 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.253391 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.253417 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.253472 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.356952 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.357035 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.357090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.357113 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.357128 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.459778 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.459851 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.459873 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.459896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.459913 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.562883 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.562962 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.562980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.563004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.563021 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.666832 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.666892 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.666905 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.666926 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.666938 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.770925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.770991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.771006 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.771029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.771043 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.875647 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.875750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.875775 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.875804 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.875825 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.978903 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.978980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.979014 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.979048 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:37 crc kubenswrapper[4846]: I1201 00:07:37.979071 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:37Z","lastTransitionTime":"2025-12-01T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.081850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.081920 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.081950 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.081981 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.082006 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.184980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.185037 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.185057 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.185083 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.185101 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.288800 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.288870 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.288914 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.288950 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.288993 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.392260 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.392897 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.392951 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.392988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.393013 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.496428 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.496480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.496494 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.496515 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.496532 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.580317 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.580349 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.580440 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.580490 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:38 crc kubenswrapper[4846]: E1201 00:07:38.580617 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:38 crc kubenswrapper[4846]: E1201 00:07:38.580771 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:38 crc kubenswrapper[4846]: E1201 00:07:38.580877 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:38 crc kubenswrapper[4846]: E1201 00:07:38.581038 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.599296 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.599346 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.599356 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.599381 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.599393 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.701865 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.701933 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.701942 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.701959 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.701972 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.805314 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.805403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.805421 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.805449 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.805466 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.908081 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.908156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.908170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.908193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:38 crc kubenswrapper[4846]: I1201 00:07:38.908208 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:38Z","lastTransitionTime":"2025-12-01T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.010748 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.010872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.010889 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.010912 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.010926 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.114899 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.114937 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.114950 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.114970 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.114983 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.218334 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.218403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.218422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.218455 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.218479 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.321408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.321475 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.321493 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.321519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.321540 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.424811 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.424893 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.424917 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.424947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.424972 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.527404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.527491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.527510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.527539 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.527561 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.597393 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.618757 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.630945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.631332 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.631497 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.631613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.631748 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.638342 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.657674 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.676411 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.696259 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.721536 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.734640 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.734727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.734746 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.734772 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.734789 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.746978 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.764348 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.783784 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.798283 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.815888 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.833087 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.839155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.839216 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.839237 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.839262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.839279 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.854152 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.872603 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.890771 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
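The kube-multus termination message above reports a timeout while waiting for the OVN-Kubernetes readiness-indicator file. A rough Go sketch of that kind of file poll, standard library only; the path and the roughly 45 second window are taken from the timestamps in the log, while the helper itself is illustrative and not multus code:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path until it exists or the deadline passes,
// mirroring the readiness-indicator check described in the log above.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // file present: default network is considered ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path and approximate timeout as reported by kube-multus in the entry above.
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf", time.Second, 45*time.Second)
	if err != nil {
		fmt.Println("readiness indicator check failed:", err)
		os.Exit(1)
	}
	fmt.Println("readiness indicator present")
}
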
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.907536 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:39Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.943183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.943266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.943290 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.943329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:39 crc kubenswrapper[4846]: I1201 00:07:39.943356 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:39Z","lastTransitionTime":"2025-12-01T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.046207 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.046510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.046645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.046764 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.046852 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.149266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.149317 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.149327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.149339 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.149348 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
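Every one of the status-patch failures above is caused by the pod.network-node-identity.openshift.io webhook presenting a serving certificate that expired on 2025-08-24. One way to confirm what the kubelet is reporting is to dial the webhook endpoint and print the certificate's validity window; a hedged Go sketch, assuming the webhook is still listening on 127.0.0.1:9743 as shown in the errors:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Address taken from the webhook errors in the journal above.
	// InsecureSkipVerify is deliberate: the goal is to inspect the
	// expired certificate rather than fail the handshake on it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
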
Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.252954 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.253436 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.253827 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.254106 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.254379 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.358664 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.358752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.358774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.358806 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.358831 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.462737 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.462796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.462812 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.462839 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.462858 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.567075 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.567155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.567184 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.567216 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.567238 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.579665 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.579751 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.579872 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:40 crc kubenswrapper[4846]: E1201 00:07:40.580077 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:40 crc kubenswrapper[4846]: E1201 00:07:40.580228 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:40 crc kubenswrapper[4846]: E1201 00:07:40.580385 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.580777 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:40 crc kubenswrapper[4846]: E1201 00:07:40.581159 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.670277 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.670333 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.670350 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.670377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.670396 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.773259 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.773677 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.773871 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.774006 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.774139 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.877577 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.877670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.877727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.877762 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.877787 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.981126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.981168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.981178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.981197 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:40 crc kubenswrapper[4846]: I1201 00:07:40.981206 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:40Z","lastTransitionTime":"2025-12-01T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.083927 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.084147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.084159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.084179 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.084191 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.188714 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.188787 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.188806 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.188833 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.188852 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.294245 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.294305 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.294321 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.294344 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.294361 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.397347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.397670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.397852 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.398006 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.398126 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.501447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.501500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.501517 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.501552 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.501570 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.581269 4846 scope.go:117] "RemoveContainer" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.604575 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.604661 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.604718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.604752 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.604778 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.707908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.707987 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.708016 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.708052 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.708079 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.811072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.811124 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.811146 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.811173 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.811192 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.914322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.914396 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.914421 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.914451 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:41 crc kubenswrapper[4846]: I1201 00:07:41.914473 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:41Z","lastTransitionTime":"2025-12-01T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.017759 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.017795 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.017805 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.017821 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.017830 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.120429 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.120476 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.120491 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.120512 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.120525 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.222670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.222725 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.222734 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.222749 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.222757 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.326418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.326463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.326476 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.326495 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.326508 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.355905 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.356067 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.356162 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.356122836 +0000 UTC m=+147.136891940 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.356206 4846 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.356282 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-01 00:08:46.356258631 +0000 UTC m=+147.137027745 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.356312 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.356469 4846 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.356515 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.356501388 +0000 UTC m=+147.137270502 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.429427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.429458 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.429467 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.429483 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.429491 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.457995 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.458333 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.458149 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.458706 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.458873 4846 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.458446 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.459125 4846 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.459139 4846 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.459105 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.459080305 +0000 UTC m=+147.239849429 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.459539 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.459519079 +0000 UTC m=+147.240288193 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.486060 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.486096 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.486107 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.486155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.486168 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.507509 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:42Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.513166 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.513217 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.513245 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.513279 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.513296 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.533791 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:42Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.538392 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.538422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.538435 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.538453 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.538465 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.557298 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:42Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.564016 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.564072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.564090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.564120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.564139 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.580029 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.580211 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.580565 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.580676 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.580915 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.580949 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.581083 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.581185 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.587080 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:42Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.591807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.591858 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.591876 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.591898 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.591916 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.612273 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:42Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:42 crc kubenswrapper[4846]: E1201 00:07:42.612471 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.618430 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.618474 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.618489 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.618513 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.618527 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.722781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.722830 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.722846 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.722868 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.722883 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.826031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.826072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.826083 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.826101 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.826111 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.928890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.928939 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.928953 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.928976 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:42 crc kubenswrapper[4846]: I1201 00:07:42.928992 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:42Z","lastTransitionTime":"2025-12-01T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.032347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.032463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.032490 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.032543 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.032570 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.078354 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/2.log" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.081102 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.082474 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.101523 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.120408 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.136570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.136621 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc 
kubenswrapper[4846]: I1201 00:07:43.136636 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.136658 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.136670 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.137140 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.155916 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.171555 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.182824 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.197497 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.214012 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.226049 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.238518 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.239829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.239869 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.239881 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.239902 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.239915 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.252521 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.275617 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8a
a23c78641ad00ef4fa18fc59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.297404 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 
00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.311134 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.325393 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.342770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.342834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.342857 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.342888 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.342910 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.344095 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.357850 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:43Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.445632 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.445716 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.445733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.445757 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.445773 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.548929 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.548986 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.549004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.549027 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.549043 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.652420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.652763 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.652969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.653125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.653293 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.756248 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.756320 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.756345 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.756376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.756397 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.859947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.859995 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.860011 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.860035 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.860051 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.963768 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.964247 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.964394 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.964544 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:43 crc kubenswrapper[4846]: I1201 00:07:43.964779 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:43Z","lastTransitionTime":"2025-12-01T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.068067 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.068134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.068147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.068175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.068191 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.089627 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/3.log" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.090814 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/2.log" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.096347 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" exitCode=1 Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.096576 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.096980 4846 scope.go:117] "RemoveContainer" containerID="40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.097876 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:07:44 crc kubenswrapper[4846]: E1201 00:07:44.098920 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.119009 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.143656 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.164394 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.171645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.171735 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.171753 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.171784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.171803 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.183284 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.209519 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40e8c5de2fc40f47b72eee9aed5dd4463f19ac5c914399322d7c1cc5b8b0b971\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:15Z\\\",\\\"message\\\":\\\"b06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1201 00:07:15.534017 6457 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:15Z is after 2025-08-24T17:21:41Z]\\\\nI1201 00:07:15.534124 6457 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:fal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:44Z\\\",\\\"message\\\":\\\"retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127259 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127272 6809 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-grqqg in node crc\\\\nI1201 00:07:43.127280 6809 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg after 0 failed attempt(s)\\\\nI1201 00:07:43.127288 6809 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127286 6809 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 00:07:43.127304 6809 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 00:07:43.127389 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.227365 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.247431 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.263785 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.276210 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.276265 4846 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.276285 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.276308 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.276324 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.278967 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.295131 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.311886 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.327605 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.341119 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.359530 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.373469 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.379196 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.379653 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.379896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.380224 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.380431 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.387419 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.401430 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:44Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.484139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.484174 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.484182 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.484196 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.484206 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.580522 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.580607 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.580585 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:44 crc kubenswrapper[4846]: E1201 00:07:44.580795 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:44 crc kubenswrapper[4846]: E1201 00:07:44.581000 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:44 crc kubenswrapper[4846]: E1201 00:07:44.581092 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.581215 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:44 crc kubenswrapper[4846]: E1201 00:07:44.581438 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.587155 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.587209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.587228 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.587256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.587276 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.691317 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.691384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.691409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.691440 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.691462 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.794615 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.794678 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.794733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.794765 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.794788 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.897902 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.897974 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.897997 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.898027 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:44 crc kubenswrapper[4846]: I1201 00:07:44.898049 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:44Z","lastTransitionTime":"2025-12-01T00:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.001414 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.001476 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.001501 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.001531 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.001551 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103267 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/3.log" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103454 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103521 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103576 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.103600 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.109390 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:07:45 crc kubenswrapper[4846]: E1201 00:07:45.109724 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.132564 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.152807 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.173932 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.197063 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.215498 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.216222 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.216292 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.216315 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.216347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.216369 4846 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.232492 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.245389 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.259774 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.273646 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.295294 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.310260 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.321013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.321091 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.321136 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.321160 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 
00:07:45.321177 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.332136 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.349919 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.366242 
4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.378535 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.402192 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:44Z\\\",\\\"message\\\":\\\"retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127259 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127272 6809 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-grqqg in node crc\\\\nI1201 00:07:43.127280 6809 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg after 0 failed attempt(s)\\\\nI1201 00:07:43.127288 6809 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127286 6809 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 00:07:43.127304 6809 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 00:07:43.127389 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.417102 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:45Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.423868 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.423907 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.423917 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.423935 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.423947 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.526533 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.526614 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.526639 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.526673 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.526728 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.630473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.630523 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.630534 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.630553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.630565 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.733639 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.733741 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.733760 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.733789 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.733809 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.837395 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.837440 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.837451 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.837469 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.837480 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.940795 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.940838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.940850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.940867 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:45 crc kubenswrapper[4846]: I1201 00:07:45.940877 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:45Z","lastTransitionTime":"2025-12-01T00:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.044781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.044842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.044865 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.044894 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.044912 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.147840 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.147929 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.147952 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.147982 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.148003 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.252017 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.252105 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.252122 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.252177 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.252197 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.355656 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.355769 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.355823 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.355855 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.355876 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.458782 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.458850 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.458872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.458905 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.458926 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.563498 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.563556 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.563574 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.563598 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.563617 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.580264 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.580287 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.580366 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.580413 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:46 crc kubenswrapper[4846]: E1201 00:07:46.580620 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:46 crc kubenswrapper[4846]: E1201 00:07:46.580939 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:46 crc kubenswrapper[4846]: E1201 00:07:46.580989 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:46 crc kubenswrapper[4846]: E1201 00:07:46.581053 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.666313 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.666361 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.666375 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.666397 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.666411 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.769740 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.769782 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.769793 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.769811 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.769823 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.873834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.873915 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.873958 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.874008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.874034 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.977095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.977481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.977494 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.977516 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:46 crc kubenswrapper[4846]: I1201 00:07:46.977526 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:46Z","lastTransitionTime":"2025-12-01T00:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.080855 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.080914 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.080927 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.080947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.080959 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.184959 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.185015 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.185029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.185052 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.185067 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.288329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.288389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.288406 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.288431 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.288447 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.392016 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.392081 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.392116 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.392150 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.392173 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.496244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.496306 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.496328 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.496357 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.496377 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.599925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.599994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.600019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.600043 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.600060 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.704336 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.704404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.704431 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.704462 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.704486 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.807619 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.807678 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.807750 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.807775 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.807798 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.912862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.912922 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.912942 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.912980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:47 crc kubenswrapper[4846]: I1201 00:07:47.913002 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:47Z","lastTransitionTime":"2025-12-01T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.016459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.016500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.016514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.016532 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.016544 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.119367 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.119404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.119440 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.119459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.119471 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.224499 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.224561 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.224584 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.224613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.224633 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.328193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.328292 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.328316 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.328347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.328364 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.431603 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.431999 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.432175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.432353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.432529 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.536493 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.536553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.536572 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.536598 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.536615 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.580150 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.580219 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:48 crc kubenswrapper[4846]: E1201 00:07:48.580353 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:48 crc kubenswrapper[4846]: E1201 00:07:48.580480 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.580853 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.580912 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:48 crc kubenswrapper[4846]: E1201 00:07:48.581402 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:48 crc kubenswrapper[4846]: E1201 00:07:48.581524 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.640911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.640968 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.641004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.641042 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.641066 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.745071 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.745127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.745144 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.745169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.745186 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.848672 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.848763 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.848780 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.848808 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.848827 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.952532 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.952587 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.952602 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.952626 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:48 crc kubenswrapper[4846]: I1201 00:07:48.952641 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:48Z","lastTransitionTime":"2025-12-01T00:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.055953 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.056008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.056017 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.056042 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.056056 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.159322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.159374 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.159391 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.159416 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.159433 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.263479 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.263566 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.263585 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.263615 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.263634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.368049 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.368126 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.368147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.368233 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.368256 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.471124 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.471205 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.471230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.471393 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.471427 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.575992 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.576048 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.576059 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.576080 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.576092 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.599575 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.603023 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.621310 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.648941 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.667857 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.679287 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.679334 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.679344 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.679364 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.679376 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.683932 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.703983 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.720578 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.746056 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.765003 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782637 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782746 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782772 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782810 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782836 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.782813 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.808590 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.827189 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.854293 4846 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:44Z\\\",\\\"message\\\":\\\"retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127259 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127272 6809 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-grqqg in node crc\\\\nI1201 00:07:43.127280 6809 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg after 0 failed attempt(s)\\\\nI1201 00:07:43.127288 6809 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127286 6809 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 00:07:43.127304 6809 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 00:07:43.127389 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.868431 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.880179 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.884872 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.884906 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.884918 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.884935 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.884946 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.892747 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.905209 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:49Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.988676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.988770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.988789 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.988818 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:49 crc kubenswrapper[4846]: I1201 00:07:49.988838 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:49Z","lastTransitionTime":"2025-12-01T00:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.091842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.091879 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.091889 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.091907 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.091918 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.195104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.195731 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.196020 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.196133 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.196225 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.299513 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.299559 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.299570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.299589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.299600 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.402968 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.403012 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.403027 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.403104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.403119 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.505844 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.506369 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.506555 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.506735 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.506943 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.580195 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.580204 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.580257 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.580266 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:50 crc kubenswrapper[4846]: E1201 00:07:50.581529 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:50 crc kubenswrapper[4846]: E1201 00:07:50.580944 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:50 crc kubenswrapper[4846]: E1201 00:07:50.581194 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:50 crc kubenswrapper[4846]: E1201 00:07:50.581752 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.611201 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.611617 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.611913 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.612138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.612329 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.715734 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.716123 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.716217 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.716328 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.716433 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.819908 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.819989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.820009 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.820036 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.820054 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.923441 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.923511 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.923529 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.923558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:50 crc kubenswrapper[4846]: I1201 00:07:50.923576 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:50Z","lastTransitionTime":"2025-12-01T00:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.027206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.027270 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.027287 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.027312 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.027329 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.130788 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.130844 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.130862 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.130892 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.130910 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.234138 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.234211 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.234223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.234243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.234256 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.337654 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.337782 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.337807 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.337834 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.337851 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.441739 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.441819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.441842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.441873 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.441894 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.544928 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.545079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.545098 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.545127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.545144 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.596665 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.648399 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.648464 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.648482 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.648510 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.648529 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.752123 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.752205 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.752225 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.752255 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.752275 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.855910 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.855971 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.855990 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.856019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.856038 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.958790 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.958854 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.958870 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.958896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:51 crc kubenswrapper[4846]: I1201 00:07:51.958913 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:51Z","lastTransitionTime":"2025-12-01T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.063054 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.063132 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.063151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.063178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.063196 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.166230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.166297 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.166324 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.166353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.166374 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.271579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.271645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.271670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.271737 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.271765 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.376168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.376230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.376249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.376277 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.376297 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.479774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.479842 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.479860 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.479890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.479910 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.580600 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.580609 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.580628 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.580671 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.582321 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.582561 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.582758 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.582925 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.583111 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.583188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.583205 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.583229 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.583248 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.686679 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.687359 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.687645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.687931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.688173 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.738444 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.738488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.738504 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.738527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.738543 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.759513 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.765114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.765174 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.765196 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.765221 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.765239 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.781088 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.786262 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.786288 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.786300 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.786318 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.786329 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.805836 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.810829 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.810894 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.810915 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.810944 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.811011 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.830851 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.836341 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.836406 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.836431 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.836462 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.836481 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.857422 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:52Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:52 crc kubenswrapper[4846]: E1201 00:07:52.857668 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.860030 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.860071 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.860089 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.860112 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.860128 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.963541 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.963612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.963630 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.963670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:52 crc kubenswrapper[4846]: I1201 00:07:52.963755 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:52Z","lastTransitionTime":"2025-12-01T00:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.067049 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.067130 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.067157 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.067190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.067211 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.170063 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.170100 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.170114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.170132 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.170142 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.272890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.272959 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.272976 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.273002 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.273020 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.376175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.376239 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.376257 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.376286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.376312 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.479606 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.479661 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.479676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.479730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.479747 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.583447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.583537 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.583559 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.583588 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.583617 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.688008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.688070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.688089 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.688115 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.688134 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.792027 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.792480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.792505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.792537 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.792560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.895514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.895553 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.895562 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.895578 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.895587 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.997884 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.997950 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.997969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.997994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:53 crc kubenswrapper[4846]: I1201 00:07:53.998013 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:53Z","lastTransitionTime":"2025-12-01T00:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.101322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.101439 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.101460 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.101531 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.101573 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.205021 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.205060 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.205073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.205092 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.205107 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.308002 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.308053 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.308066 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.308086 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.308098 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.411389 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.411453 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.411470 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.411496 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.411515 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.515188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.515257 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.515279 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.515311 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.515332 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.579901 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.579948 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.579950 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.580003 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:54 crc kubenswrapper[4846]: E1201 00:07:54.580213 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:54 crc kubenswrapper[4846]: E1201 00:07:54.580333 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:54 crc kubenswrapper[4846]: E1201 00:07:54.580529 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:54 crc kubenswrapper[4846]: E1201 00:07:54.580588 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.618950 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.619008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.619031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.619059 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.619077 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.722448 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.722506 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.722519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.722542 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.722560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.827791 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.827838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.827867 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.827911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.827931 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.931075 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.931125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.931142 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.931165 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:54 crc kubenswrapper[4846]: I1201 00:07:54.931182 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:54Z","lastTransitionTime":"2025-12-01T00:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.034487 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.034595 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.034607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.034626 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.034637 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.138366 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.138438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.138456 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.138483 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.138501 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.241884 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.241951 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.241985 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.242018 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.242039 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.345730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.345838 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.345869 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.345902 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.345922 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.449712 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.450233 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.450324 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.450784 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.450845 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.554323 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.554396 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.554418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.554448 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.554470 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.657346 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.657420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.657447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.657477 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.657497 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.760644 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.760713 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.760724 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.760741 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.760755 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.863756 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.863819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.863836 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.863863 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.863886 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.966989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.967104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.967128 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.967154 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:55 crc kubenswrapper[4846]: I1201 00:07:55.967174 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:55Z","lastTransitionTime":"2025-12-01T00:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.070447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.070514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.070538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.070568 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.070589 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.174145 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.174309 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.174388 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.174479 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.174517 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.278435 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.278580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.278598 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.278623 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.278639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.387444 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.387537 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.387563 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.387613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.387639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.491915 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.491954 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.491964 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.491980 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.491992 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.580236 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.580300 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.580310 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.580261 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:56 crc kubenswrapper[4846]: E1201 00:07:56.580495 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:56 crc kubenswrapper[4846]: E1201 00:07:56.580773 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:56 crc kubenswrapper[4846]: E1201 00:07:56.580947 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:56 crc kubenswrapper[4846]: E1201 00:07:56.581088 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.595413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.595492 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.595515 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.595547 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.595567 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.698398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.698486 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.698509 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.698539 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.698560 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.801470 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.801545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.801580 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.801612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.801633 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.904608 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.904709 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.904729 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.904755 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:56 crc kubenswrapper[4846]: I1201 00:07:56.904775 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:56Z","lastTransitionTime":"2025-12-01T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.007302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.007343 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.007355 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.007373 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.007384 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.111168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.111237 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.111251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.111268 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.111278 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.215327 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.215374 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.215388 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.215408 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.215424 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.318386 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.318466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.318493 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.318527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.318551 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.422175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.422230 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.422243 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.422261 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.422273 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.526551 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.526769 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.526793 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.526819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.526836 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.630933 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.630987 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.631005 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.631031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.631047 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.734364 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.734427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.734444 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.734469 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.734487 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.838232 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.838300 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.838319 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.838349 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.838370 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.942505 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.942573 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.942599 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.942634 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:57 crc kubenswrapper[4846]: I1201 00:07:57.942657 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:57Z","lastTransitionTime":"2025-12-01T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.046125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.046194 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.046218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.046253 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.046273 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.148480 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.148570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.148585 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.148607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.148624 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.252928 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.252992 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.253008 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.253034 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.253047 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.356358 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.356409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.356422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.356441 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.356458 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.459898 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.460014 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.460033 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.460062 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.460079 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.563419 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.563659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.563733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.563770 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.563792 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.580265 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.580300 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:07:58 crc kubenswrapper[4846]: E1201 00:07:58.580427 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:07:58 crc kubenswrapper[4846]: E1201 00:07:58.580578 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.580656 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.580676 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:07:58 crc kubenswrapper[4846]: E1201 00:07:58.581153 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:07:58 crc kubenswrapper[4846]: E1201 00:07:58.581312 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.581789 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:07:58 crc kubenswrapper[4846]: E1201 00:07:58.582046 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.666511 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.666571 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.666589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.666624 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.666641 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.769602 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.769666 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.769676 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.769715 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.769732 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.873270 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.873333 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.873350 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.873377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.873394 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.977170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.977251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.977269 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.977296 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:58 crc kubenswrapper[4846]: I1201 00:07:58.977312 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:58Z","lastTransitionTime":"2025-12-01T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.080840 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.080917 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.080934 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.080964 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.080987 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.183989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.184051 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.184073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.184104 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.184125 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.287339 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.287422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.287488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.287522 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.287544 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.390463 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.390534 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.390559 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.390590 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.390614 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.494157 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.494209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.494229 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.494257 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.494275 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.597260 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.597335 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.597348 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.597409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.597425 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.602755 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd16b1818e517ec960b65ebe038589e0375a1580d7117b2b62b330182ff148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.627969 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-grsdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2776496-08ee-4019-83d5-a487629a1c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50a2950cd1a5302645116982a650b0b82d2e03014257af43671a79dbd0bcd0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64f23991b793cf5e97e5776cb861cb33b59cab85a7cef8d0f6ed6aa2720f7a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc58188e6854d13c5100468c0c7c4354b54c0dd5f023b5d6cc6da9cd85c73ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1330fe9d0115ca7c04c6fdd6bd3efdb5235a42d38327994b0af52b1d516c888\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9e7285c6290b8e25b9f800628879838c63cd801dec00c55f3e5f0e41e93560f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe119c85339c996657cfac30c69aa02687291b482aaf108ec3e06824f32768db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1baa1c1a80936853d17d201409b84cab12b15a5a4debb2dd7dbea4685a5108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vqx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-grsdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.652350 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"da34c7ad-c9ef-4ce8-9ea6-8a6ef7540042\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af9a1c5d14ee062fb3acc5e24f73e804f46f4d15328b799793786cfbc5795580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6844fe574b684796aeb5c8b801fe3e32ca90588a2054ebe60f78858942fb0912\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a900eba423160ba74522113928368c9559e4a65508093fbbbcab6e5b60188437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://838a1002417f130ce2214e32c34dc7d44d677df
2fda94ea38d4e9cce126b8da6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4dd99863112ab7429e6d93e787c8d177aec97fb76369bff21baf27cd7e99f10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cff277f2534205b1b1c0132c86d9500766ff160fd760c34d692bb11dd1480d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84cff277f2534205b1b1c0132c86d9500766ff160fd760c34d692bb11dd1480d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b36704ddd308d36586f0365df63e2686f2df761ca47b0f68e1256e4ebba830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b36704ddd308d36586f0365df63e2686f2df761ca47b0f68e1256e4ebba830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0c7b2e68b6e6341a368c5c6058f283428489f545d561dc9939ef3105d7bd0685\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c7b2e68b6e6341a368c5c6058f283428489f545d561dc9939ef3105d7bd0685\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.668951 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.685064 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfe573570a23db7203e2c7c997bb40a6e87d4a41ef76753f8f9dd000caeee8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.701663 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.701741 4846 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.701757 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.701779 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.701794 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.702502 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gzjjx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:31Z\\\",\\\"message\\\":\\\"2025-12-01T00:06:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c\\\\n2025-12-01T00:06:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e7489d9b-cb4d-441b-8dbc-f1983f080e2c to /host/opt/cni/bin/\\\\n2025-12-01T00:06:46Z [verbose] multus-daemon started\\\\n2025-12-01T00:06:46Z [verbose] Readiness Indicator file check\\\\n2025-12-01T00:07:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhpr7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gzjjx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.720380 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-f9qcg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a413db-a45d-4559-b7ee-4c4c9b75a24a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425d86289cd424028fefc97b7f0e52cdb0b997a0e2646ca1635c0341c59b3ef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zwn2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-f9qcg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.741277 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"message\\\":\\\"t denominator for mutating requests\\\\\\\" limit=200\\\\nI1201 00:06:37.690583 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1201 00:06:37.690608 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1201 00:06:37.690618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1201 00:06:37.690621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1201 00:06:37.690623 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1201 00:06:37.690627 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1201 00:06:37.690805 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1201 00:06:37.695726 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2460938590/tls.crt::/tmp/serving-cert-2460938590/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764547582\\\\\\\\\\\\\\\" (2025-12-01 00:06:21 +0000 UTC to 2025-12-31 00:06:22 +0000 UTC (now=2025-12-01 00:06:37.695675758 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696040 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764547592\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764547592\\\\\\\\\\\\\\\" (2025-11-30 23:06:32 +0000 UTC to 2026-11-30 23:06:32 +0000 UTC (now=2025-12-01 00:06:37.696018849 +0000 UTC))\\\\\\\"\\\\nI1201 00:06:37.696124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1201 00:06:37.696212 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF1201 00:06:37.695815 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.758944 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.774962 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pjv9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a49e025b-7c84-4c37-b84b-269c5c74a9b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc51da9da2183e44cf8dbc2d4dec40e8bc1013ecc418328fd70c99dbd62cbcfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6grv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pjv9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.793058 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99e1db13-2f91-4ea1-bd80-621b95287c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52727c3430d9acfe9314b34d4098a8edf09b0adbb777284b5cff67502fc9d18a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91764e7a1f8fb95f29cc1491a21892a149546265fa32024b939f64839d1fd8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91764e7a1f8fb95f29cc1491a21892a149546265fa32024b939f64839d1fd8d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.804947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.805038 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.805086 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.805111 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.805127 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.809563 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6862b4f6-969b-4e1f-9619-7e56cdef26b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4274083c85d1f11724c56bc285fd70d49613b5fbc2a8d40c2b3835f6da534c90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://397e8d6d0342ce96f5a156ad42e2844b426813abd5f2bb07848b14f961b955bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbef607e57a08ee3e062fdc01436371e7cd818840ce7a9377a9f7eb4fec464f7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.825168 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ffa0230-e813-4ad9-a5c6-8c842f3a8aba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c66314728b893b221becb8ccedfb99dee9180d6063a316e68eaf9c0701a4a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e1fbeba2327a313b56361d5163eb6902da1f73534c21a7e0b99dd5e088f4fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ab6889e07d322cf71dda2548ef89f736914c8ae917f5425f7f1891eb9217e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ba1ca2fd0d6e432af97bbb158f9cebeef59bcc4cbde1ee2338594391b6cd441\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.849159 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"358371ac-c594-492b-98ad-0da4bc7d9d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8a
a23c78641ad00ef4fa18fc59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-01T00:07:44Z\\\",\\\"message\\\":\\\"retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127259 6809 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127272 6809 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-grqqg in node crc\\\\nI1201 00:07:43.127280 6809 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-grqqg after 0 failed attempt(s)\\\\nI1201 00:07:43.127288 6809 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-grqqg\\\\nI1201 00:07:43.127286 6809 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1201 00:07:43.127304 6809 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1201 00:07:43.127389 6809 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-01T00:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-01T00:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-01T00:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2xpt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fpx9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.864742 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a431347f-cbbf-4e17-b470-a08d42a11b86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30a5f51c1b46ac294231db45db47af68188b14d31c676c9e33be55d569109b85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e32804a881fe8ae5f0b3f670826bd552f22c8df0145eaed499e7620ea344d9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kv76z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-f9x42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.878291 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rl69z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"219022f7-8f31-4021-9df8-733c23b34602\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mngnh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rl69z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.893488 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23500750f45bb4c494f258db7bb5b020ac39baf538c381a21448e1429213fc19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e60fede12bf9df2affc13f7c03fae988b2bc8471b7d3e193901a0ea533bbdd48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.908933 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.908982 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.908998 4846 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.909020 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.909033 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:07:59Z","lastTransitionTime":"2025-12-01T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.909324 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:07:59 crc kubenswrapper[4846]: I1201 00:07:59.921078 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d981647e-2c46-4ad1-afd7-757ef36643f8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-01T00:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dbe3837977f53de8f437ed76156407bde95c012ed17fc82242e20727fa1d736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:06:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6b86g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-01T00:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-grqqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:07:59Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.012203 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.012292 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.012329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.012365 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.012390 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.115208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.115271 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.115286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.115312 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.115331 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.218094 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.218154 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.218169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.218188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.218201 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.321470 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.321911 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.322198 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.322442 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.323072 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.426813 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.427311 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.427472 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.427623 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.427804 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.531342 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.531398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.531411 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.531436 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.531454 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.579835 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.579908 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.579904 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.579999 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:00 crc kubenswrapper[4846]: E1201 00:08:00.580175 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:00 crc kubenswrapper[4846]: E1201 00:08:00.580024 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:00 crc kubenswrapper[4846]: E1201 00:08:00.580513 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:00 crc kubenswrapper[4846]: E1201 00:08:00.580611 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.634056 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.634124 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.634137 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.634159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.634176 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.736934 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.737009 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.737031 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.737064 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.737084 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.840161 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.840231 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.840253 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.840287 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.840308 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.943464 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.943520 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.943530 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.943550 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:00 crc kubenswrapper[4846]: I1201 00:08:00.943562 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:00Z","lastTransitionTime":"2025-12-01T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.047312 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.047387 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.047409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.047443 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.047466 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.151103 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.151175 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.151197 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.151224 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.151244 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.255322 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.255381 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.255397 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.255422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.255441 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.358890 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.358947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.358965 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.358991 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.359011 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.463350 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.463898 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.464119 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.464287 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.464436 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.568772 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.568839 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.568857 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.568882 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.568898 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.672849 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.673253 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.673343 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.673459 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.673541 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.776961 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.777004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.777013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.777030 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.777045 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.880134 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.880178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.880188 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.880206 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.880218 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.983549 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.984051 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.984177 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.984295 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:01 crc kubenswrapper[4846]: I1201 00:08:01.984400 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:01Z","lastTransitionTime":"2025-12-01T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.088400 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.088768 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.088994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.089092 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.089163 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.192547 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.192603 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.192620 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.192644 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.192665 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.296282 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.296360 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.296379 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.296409 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.296434 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.399782 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.399846 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.399866 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.399891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.399910 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.503398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.503465 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.503483 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.503511 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.503530 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.579764 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.579844 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.579844 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:02 crc kubenswrapper[4846]: E1201 00:08:02.580009 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.580029 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:02 crc kubenswrapper[4846]: E1201 00:08:02.580155 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:02 crc kubenswrapper[4846]: E1201 00:08:02.580308 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:02 crc kubenswrapper[4846]: E1201 00:08:02.580455 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.607635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.607734 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.607755 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.607785 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.607804 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.711095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.711168 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.711191 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.711223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.711247 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.817281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.817350 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.817367 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.817399 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.817418 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.919791 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.919865 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.919893 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.919943 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:02 crc kubenswrapper[4846]: I1201 00:08:02.919968 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:02Z","lastTransitionTime":"2025-12-01T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.023506 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.023579 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.023601 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.023633 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.023655 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.107308 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.107344 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.107352 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.107377 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.107387 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.123704 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:08:03Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.127169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.127193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.127202 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.127214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.127222 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.143221 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:08:03Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.146703 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.146730 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.146738 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.146751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.146761 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.162384 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:08:03Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.165922 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.165949 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.165957 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.165969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.165978 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.183185 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:08:03Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.187266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.187289 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.187296 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.187307 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.187315 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.203333 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-01T00:08:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6988692f-f9e5-459a-a6c8-c307d43c0948\\\",\\\"systemUUID\\\":\\\"2d73afc2-2e69-417d-b195-29982d0d72a9\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-01T00:08:03Z is after 2025-08-24T17:21:41Z" Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.203470 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.207103 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.207151 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.207165 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.207187 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.207205 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.310022 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.310084 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.310107 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.310139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.310162 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.404592 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.404836 4846 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:08:03 crc kubenswrapper[4846]: E1201 00:08:03.404939 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs podName:219022f7-8f31-4021-9df8-733c23b34602 nodeName:}" failed. No retries permitted until 2025-12-01 00:09:07.404900989 +0000 UTC m=+168.185670093 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs") pod "network-metrics-daemon-rl69z" (UID: "219022f7-8f31-4021-9df8-733c23b34602") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.413490 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.413542 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.413560 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.413585 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.413603 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.517340 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.517401 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.517418 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.517444 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.517464 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.620318 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.620383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.620404 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.620428 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.620447 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.724416 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.724481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.724498 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.724525 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.724549 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.828844 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.828933 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.828960 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.828989 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.829008 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.932011 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.932072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.932090 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.932114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:03 crc kubenswrapper[4846]: I1201 00:08:03.932131 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:03Z","lastTransitionTime":"2025-12-01T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.034601 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.034644 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.034655 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.034673 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.034709 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.137338 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.137376 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.137396 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.137413 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.137430 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.240600 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.240681 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.240724 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.240749 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.240762 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.344156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.344216 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.344235 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.344263 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.344286 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.447623 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.447698 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.447766 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.447808 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.447830 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.551110 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.551178 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.551197 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.551227 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.551247 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.579455 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.579546 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:04 crc kubenswrapper[4846]: E1201 00:08:04.579664 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.579763 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:04 crc kubenswrapper[4846]: E1201 00:08:04.579844 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.579516 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:04 crc kubenswrapper[4846]: E1201 00:08:04.579928 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:04 crc kubenswrapper[4846]: E1201 00:08:04.580071 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.654613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.654698 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.654764 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.654798 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.654821 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.758473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.758562 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.758588 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.758617 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.758639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.862252 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.862335 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.862358 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.862391 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.862418 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.966004 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.966072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.966089 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.966114 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:04 crc kubenswrapper[4846]: I1201 00:08:04.966131 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:04Z","lastTransitionTime":"2025-12-01T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.071796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.071864 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.071887 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.071917 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.071938 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.176096 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.176186 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.176208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.176233 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.176252 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.279363 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.279406 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.279417 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.279433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.279443 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.382321 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.382385 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.382403 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.382433 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.382452 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.485600 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.485641 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.485653 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.485671 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.485708 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.588853 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.588914 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.588931 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.588956 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.588977 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.692564 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.692635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.692653 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.692678 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.692730 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.796398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.796471 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.796488 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.796514 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.796531 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.900072 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.900136 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.900159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.900181 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:05 crc kubenswrapper[4846]: I1201 00:08:05.900193 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:05Z","lastTransitionTime":"2025-12-01T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.003637 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.003701 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.003714 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.003733 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.003745 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.107891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.107966 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.107983 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.108011 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.108029 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.211242 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.211304 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.211321 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.211347 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.211366 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.314530 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.314589 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.314607 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.314632 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.314649 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.418946 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.419021 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.419038 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.419063 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.419081 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.521934 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.522198 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.522221 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.522249 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.522270 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.579675 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.579743 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.579677 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.579796 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:06 crc kubenswrapper[4846]: E1201 00:08:06.579941 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:06 crc kubenswrapper[4846]: E1201 00:08:06.580150 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:06 crc kubenswrapper[4846]: E1201 00:08:06.580263 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:06 crc kubenswrapper[4846]: E1201 00:08:06.580370 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.625500 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.625570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.625627 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.625659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.625681 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.729162 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.729212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.729229 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.729254 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.729271 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.832033 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.832121 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.832139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.832167 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.832183 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.936267 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.936394 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.936426 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.936457 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:06 crc kubenswrapper[4846]: I1201 00:08:06.936477 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:06Z","lastTransitionTime":"2025-12-01T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.040093 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.040171 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.040189 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.040218 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.040238 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.142881 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.142988 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.143003 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.143024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.143043 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.245225 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.245256 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.245266 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.245284 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.245294 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.348825 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.348913 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.348935 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.348969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.348994 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.452022 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.452127 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.452150 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.452185 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.452205 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.555937 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.555994 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.556013 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.556038 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.556055 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.659945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.660024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.660045 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.660108 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.660125 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.762871 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.762945 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.762970 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.763000 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.763035 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.865609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.865674 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.865727 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.865781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.865800 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.968481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.968527 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.968545 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.968570 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:07 crc kubenswrapper[4846]: I1201 00:08:07.968589 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:07Z","lastTransitionTime":"2025-12-01T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.073379 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.073438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.073449 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.073473 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.073486 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.177468 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.177575 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.177602 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.177670 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.177749 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.281073 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.281139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.281159 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.281183 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.281202 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.384819 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.384896 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.384916 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.384947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.384988 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.488612 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.488718 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.488736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.488761 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.488779 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.580230 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.580311 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.580422 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.580549 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:08 crc kubenswrapper[4846]: E1201 00:08:08.580780 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:08 crc kubenswrapper[4846]: E1201 00:08:08.580869 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:08 crc kubenswrapper[4846]: E1201 00:08:08.581029 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:08 crc kubenswrapper[4846]: E1201 00:08:08.581117 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.592300 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.592384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.592399 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.592424 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.592443 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.696043 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.696121 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.696141 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.696169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.696189 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.799014 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.799078 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.799095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.799120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.799140 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.902251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.902324 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.902341 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.902367 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:08 crc kubenswrapper[4846]: I1201 00:08:08.902387 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:08Z","lastTransitionTime":"2025-12-01T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.006532 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.006610 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.006635 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.006668 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.006742 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.110843 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.110924 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.110944 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.110971 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.110990 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.214223 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.214286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.214310 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.214339 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.214361 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.317412 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.317481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.317494 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.317519 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.317532 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.420332 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.420383 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.420398 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.420420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.420437 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.523673 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.523796 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.523863 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.523914 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.523948 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.630695 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.630751 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.630763 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.630781 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.630792 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.689805 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podStartSLOduration=85.689783763 podStartE2EDuration="1m25.689783763s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.659649736 +0000 UTC m=+110.440418830" watchObservedRunningTime="2025-12-01 00:08:09.689783763 +0000 UTC m=+110.470552857" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.709654 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-f9x42" podStartSLOduration=85.709634245 podStartE2EDuration="1m25.709634245s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.704181867 +0000 UTC m=+110.484950941" watchObservedRunningTime="2025-12-01 00:08:09.709634245 +0000 UTC m=+110.490403329" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.732380 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.732412 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.732422 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.732438 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.732449 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.745222 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=20.745202589 podStartE2EDuration="20.745202589s" podCreationTimestamp="2025-12-01 00:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.742678462 +0000 UTC m=+110.523447536" watchObservedRunningTime="2025-12-01 00:08:09.745202589 +0000 UTC m=+110.525971663" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.798883 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-grsdk" podStartSLOduration=85.798863811 podStartE2EDuration="1m25.798863811s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.79752066 +0000 UTC m=+110.578289754" watchObservedRunningTime="2025-12-01 00:08:09.798863811 +0000 UTC m=+110.579632895" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.824048 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.824024276 podStartE2EDuration="1m32.824024276s" podCreationTimestamp="2025-12-01 00:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.811499201 +0000 UTC m=+110.592268285" watchObservedRunningTime="2025-12-01 00:08:09.824024276 +0000 UTC m=+110.604793360" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.836070 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.836135 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.836152 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.836180 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.836195 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.838585 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pjv9m" podStartSLOduration=85.838565243 podStartE2EDuration="1m25.838565243s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.838168171 +0000 UTC m=+110.618937265" watchObservedRunningTime="2025-12-01 00:08:09.838565243 +0000 UTC m=+110.619334327" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.864642 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gzjjx" podStartSLOduration=85.864622316 podStartE2EDuration="1m25.864622316s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.863889733 +0000 UTC m=+110.644658827" watchObservedRunningTime="2025-12-01 00:08:09.864622316 +0000 UTC m=+110.645391400" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.894272 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f9qcg" podStartSLOduration=85.894253068 podStartE2EDuration="1m25.894253068s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.879490853 +0000 UTC m=+110.660259937" watchObservedRunningTime="2025-12-01 00:08:09.894253068 +0000 UTC m=+110.675022142" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.911139 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.911117017 podStartE2EDuration="1m32.911117017s" podCreationTimestamp="2025-12-01 00:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.910799297 +0000 UTC m=+110.691568371" watchObservedRunningTime="2025-12-01 00:08:09.911117017 +0000 UTC m=+110.691886091" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.912071 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.912061996 podStartE2EDuration="18.912061996s" podCreationTimestamp="2025-12-01 00:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.894176795 +0000 UTC m=+110.674945869" watchObservedRunningTime="2025-12-01 00:08:09.912061996 +0000 UTC m=+110.692831080" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.938284 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.938320 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.938329 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.938346 4846 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 01 00:08:09 crc kubenswrapper[4846]: I1201 00:08:09.938356 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:09Z","lastTransitionTime":"2025-12-01T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.040910 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.040998 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.041024 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.041061 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.041089 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.144209 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.144286 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.144302 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.144320 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.144357 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.246414 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.246453 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.246465 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.246481 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.246494 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.349969 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.350032 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.350050 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.350079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.350099 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.454374 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.454446 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.454464 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.454492 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.454508 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.558017 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.558100 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.558120 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.558147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.558167 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.579586 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:10 crc kubenswrapper[4846]: E1201 00:08:10.580157 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.579737 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:10 crc kubenswrapper[4846]: E1201 00:08:10.580603 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.579612 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.579811 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:10 crc kubenswrapper[4846]: E1201 00:08:10.581383 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:10 crc kubenswrapper[4846]: E1201 00:08:10.581560 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.661144 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.661207 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.661226 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.661251 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.661271 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.765986 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.766119 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.766139 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.766163 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.766181 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.870018 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.870079 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.870101 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.870131 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.870150 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.973074 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.973125 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.973147 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.973170 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:10 crc kubenswrapper[4846]: I1201 00:08:10.973187 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:10Z","lastTransitionTime":"2025-12-01T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.076736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.076788 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.076804 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.076828 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.076845 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.180476 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.180558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.180583 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.180609 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.180627 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.284640 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.284737 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.284803 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.284836 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.284860 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.388774 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.388871 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.388891 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.388920 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.388943 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.491979 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.492019 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.492029 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.492047 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.492059 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.581589 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:08:11 crc kubenswrapper[4846]: E1201 00:08:11.581916 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fpx9q_openshift-ovn-kubernetes(358371ac-c594-492b-98ad-0da4bc7d9d16)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.594595 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.594645 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.594662 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.594689 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.594732 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.698483 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.698526 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.698538 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.698558 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.698573 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.801780 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.801859 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.801877 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.801905 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.801975 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.906092 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.906182 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.906212 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.906244 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:11 crc kubenswrapper[4846]: I1201 00:08:11.906266 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:11Z","lastTransitionTime":"2025-12-01T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.009326 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.009395 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.009420 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.009447 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.009467 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.113465 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.113540 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.113559 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.113598 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.113634 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.218310 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.218407 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.218427 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.218466 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.218491 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.322078 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.322137 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.322156 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.322186 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.322204 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.426679 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.426736 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.426747 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.426765 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.426776 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.530436 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.530521 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.530542 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.530573 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.530590 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.579751 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.579778 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.579857 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.579772 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:12 crc kubenswrapper[4846]: E1201 00:08:12.580105 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:12 crc kubenswrapper[4846]: E1201 00:08:12.580207 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:12 crc kubenswrapper[4846]: E1201 00:08:12.580444 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:12 crc kubenswrapper[4846]: E1201 00:08:12.580642 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.634384 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.634548 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.634569 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.634630 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.634651 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.738113 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.738169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.738190 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.738214 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.738230 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.842095 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.842169 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.842193 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.842224 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.842245 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.945560 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.945673 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.945714 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.945740 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:12 crc kubenswrapper[4846]: I1201 00:08:12.945766 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:12Z","lastTransitionTime":"2025-12-01T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.048208 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.048328 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.048353 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.048380 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.048401 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.150490 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.150563 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.150584 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.150613 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.150639 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.252616 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.252649 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.252659 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.252672 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.252705 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.355204 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.355281 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.355304 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.355338 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.355361 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.457835 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.457902 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.457929 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.457971 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.457994 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.519877 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.519925 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.519939 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.519968 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.519980 4846 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-01T00:08:13Z","lastTransitionTime":"2025-12-01T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.592497 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=64.592424406 podStartE2EDuration="1m4.592424406s" podCreationTimestamp="2025-12-01 00:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:09.922568719 +0000 UTC m=+110.703337803" watchObservedRunningTime="2025-12-01 00:08:13.592424406 +0000 UTC m=+114.373193540" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.597580 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd"] Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.598504 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.601981 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.602102 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.602026 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.602193 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.736454 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.736999 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.737155 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afd28f27-9f0f-4313-88ee-b90837ee5ea6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.737290 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afd28f27-9f0f-4313-88ee-b90837ee5ea6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.737391 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afd28f27-9f0f-4313-88ee-b90837ee5ea6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.839383 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afd28f27-9f0f-4313-88ee-b90837ee5ea6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc 
kubenswrapper[4846]: I1201 00:08:13.839472 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afd28f27-9f0f-4313-88ee-b90837ee5ea6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.839572 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.839625 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.839810 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afd28f27-9f0f-4313-88ee-b90837ee5ea6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.839956 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.840075 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afd28f27-9f0f-4313-88ee-b90837ee5ea6-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.840640 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afd28f27-9f0f-4313-88ee-b90837ee5ea6-service-ca\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.850897 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afd28f27-9f0f-4313-88ee-b90837ee5ea6-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.870925 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afd28f27-9f0f-4313-88ee-b90837ee5ea6-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-l97rd\" (UID: \"afd28f27-9f0f-4313-88ee-b90837ee5ea6\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:13 crc kubenswrapper[4846]: I1201 00:08:13.929335 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.229645 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" event={"ID":"afd28f27-9f0f-4313-88ee-b90837ee5ea6","Type":"ContainerStarted","Data":"d358d5d69119fa0ceb3b34b983b9e87d8cf917687f049a52b9c9ba5715fabbc8"} Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.230092 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" event={"ID":"afd28f27-9f0f-4313-88ee-b90837ee5ea6","Type":"ContainerStarted","Data":"1fcb4e159371c391494035817a7fd0ae29571551dbf4895ea8711ef2bfa707ba"} Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.257133 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-l97rd" podStartSLOduration=90.257100646 podStartE2EDuration="1m30.257100646s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:14.255163306 +0000 UTC m=+115.035932420" watchObservedRunningTime="2025-12-01 00:08:14.257100646 +0000 UTC m=+115.037869760" Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.579483 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.579556 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.579789 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:14 crc kubenswrapper[4846]: I1201 00:08:14.579823 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:14 crc kubenswrapper[4846]: E1201 00:08:14.579959 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:14 crc kubenswrapper[4846]: E1201 00:08:14.580174 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:14 crc kubenswrapper[4846]: E1201 00:08:14.580339 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:14 crc kubenswrapper[4846]: E1201 00:08:14.580566 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:16 crc kubenswrapper[4846]: I1201 00:08:16.580016 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:16 crc kubenswrapper[4846]: E1201 00:08:16.580177 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:16 crc kubenswrapper[4846]: I1201 00:08:16.580033 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:16 crc kubenswrapper[4846]: I1201 00:08:16.580282 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:16 crc kubenswrapper[4846]: I1201 00:08:16.580384 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:16 crc kubenswrapper[4846]: E1201 00:08:16.580470 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:16 crc kubenswrapper[4846]: E1201 00:08:16.580534 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:16 crc kubenswrapper[4846]: E1201 00:08:16.580606 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.246151 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/1.log" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.246879 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/0.log" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.246927 4846 generic.go:334] "Generic (PLEG): container finished" podID="607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c" containerID="f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f" exitCode=1 Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.246965 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerDied","Data":"f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f"} Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.247005 4846 scope.go:117] "RemoveContainer" containerID="e43dad93e72c128f3c8ebf9033277d8908512f41b85627e95108cb6b52797af6" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.247547 4846 scope.go:117] "RemoveContainer" containerID="f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f" Dec 01 00:08:18 crc kubenswrapper[4846]: E1201 00:08:18.247794 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gzjjx_openshift-multus(607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c)\"" pod="openshift-multus/multus-gzjjx" podUID="607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.579836 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.579978 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.579981 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:18 crc kubenswrapper[4846]: I1201 00:08:18.580067 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:18 crc kubenswrapper[4846]: E1201 00:08:18.580066 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:18 crc kubenswrapper[4846]: E1201 00:08:18.580296 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:18 crc kubenswrapper[4846]: E1201 00:08:18.580397 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:18 crc kubenswrapper[4846]: E1201 00:08:18.580610 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:19 crc kubenswrapper[4846]: I1201 00:08:19.254599 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/1.log" Dec 01 00:08:19 crc kubenswrapper[4846]: E1201 00:08:19.567257 4846 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 01 00:08:19 crc kubenswrapper[4846]: E1201 00:08:19.705221 4846 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 00:08:20 crc kubenswrapper[4846]: I1201 00:08:20.579822 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:20 crc kubenswrapper[4846]: I1201 00:08:20.579905 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:20 crc kubenswrapper[4846]: E1201 00:08:20.579976 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:20 crc kubenswrapper[4846]: I1201 00:08:20.580025 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:20 crc kubenswrapper[4846]: E1201 00:08:20.580208 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:20 crc kubenswrapper[4846]: I1201 00:08:20.579915 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:20 crc kubenswrapper[4846]: E1201 00:08:20.580628 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:20 crc kubenswrapper[4846]: E1201 00:08:20.580794 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:22 crc kubenswrapper[4846]: I1201 00:08:22.579995 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:22 crc kubenswrapper[4846]: I1201 00:08:22.580045 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:22 crc kubenswrapper[4846]: E1201 00:08:22.581012 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:22 crc kubenswrapper[4846]: I1201 00:08:22.580104 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:22 crc kubenswrapper[4846]: I1201 00:08:22.580094 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:22 crc kubenswrapper[4846]: E1201 00:08:22.581149 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:22 crc kubenswrapper[4846]: E1201 00:08:22.581106 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:22 crc kubenswrapper[4846]: E1201 00:08:22.581288 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:24 crc kubenswrapper[4846]: I1201 00:08:24.579983 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:24 crc kubenswrapper[4846]: I1201 00:08:24.580051 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:24 crc kubenswrapper[4846]: I1201 00:08:24.580118 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:24 crc kubenswrapper[4846]: E1201 00:08:24.580183 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:24 crc kubenswrapper[4846]: E1201 00:08:24.580360 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:24 crc kubenswrapper[4846]: E1201 00:08:24.580578 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:24 crc kubenswrapper[4846]: I1201 00:08:24.580646 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:24 crc kubenswrapper[4846]: E1201 00:08:24.580816 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:24 crc kubenswrapper[4846]: E1201 00:08:24.707382 4846 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 00:08:26 crc kubenswrapper[4846]: I1201 00:08:26.580099 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:26 crc kubenswrapper[4846]: I1201 00:08:26.580182 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:26 crc kubenswrapper[4846]: I1201 00:08:26.580182 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:26 crc kubenswrapper[4846]: I1201 00:08:26.580138 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:26 crc kubenswrapper[4846]: E1201 00:08:26.580345 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:26 crc kubenswrapper[4846]: E1201 00:08:26.580590 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:26 crc kubenswrapper[4846]: E1201 00:08:26.581269 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:26 crc kubenswrapper[4846]: E1201 00:08:26.581409 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:26 crc kubenswrapper[4846]: I1201 00:08:26.581718 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.288853 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/3.log" Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.292612 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerStarted","Data":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.293122 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.320807 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podStartSLOduration=103.32078862 podStartE2EDuration="1m43.32078862s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:27.318424347 +0000 UTC m=+128.099193441" watchObservedRunningTime="2025-12-01 00:08:27.32078862 +0000 UTC m=+128.101557704" Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.501376 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rl69z"] Dec 01 00:08:27 crc kubenswrapper[4846]: I1201 00:08:27.501567 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:27 crc kubenswrapper[4846]: E1201 00:08:27.501817 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:28 crc kubenswrapper[4846]: I1201 00:08:28.580092 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:28 crc kubenswrapper[4846]: I1201 00:08:28.580142 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:28 crc kubenswrapper[4846]: I1201 00:08:28.580092 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:28 crc kubenswrapper[4846]: E1201 00:08:28.580293 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:28 crc kubenswrapper[4846]: E1201 00:08:28.580394 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:28 crc kubenswrapper[4846]: E1201 00:08:28.580500 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:29 crc kubenswrapper[4846]: I1201 00:08:29.579953 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:29 crc kubenswrapper[4846]: E1201 00:08:29.580758 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:29 crc kubenswrapper[4846]: E1201 00:08:29.708336 4846 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 01 00:08:30 crc kubenswrapper[4846]: I1201 00:08:30.579860 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:30 crc kubenswrapper[4846]: E1201 00:08:30.580189 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:30 crc kubenswrapper[4846]: I1201 00:08:30.580255 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:30 crc kubenswrapper[4846]: I1201 00:08:30.580257 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:30 crc kubenswrapper[4846]: E1201 00:08:30.580565 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:30 crc kubenswrapper[4846]: I1201 00:08:30.580651 4846 scope.go:117] "RemoveContainer" containerID="f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f" Dec 01 00:08:30 crc kubenswrapper[4846]: E1201 00:08:30.581023 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:31 crc kubenswrapper[4846]: I1201 00:08:31.319285 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/1.log" Dec 01 00:08:31 crc kubenswrapper[4846]: I1201 00:08:31.319663 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerStarted","Data":"cc4d60edb400f37047e3f32cabed2e42ffd672616820147beaed34545b87f90f"} Dec 01 00:08:31 crc kubenswrapper[4846]: I1201 00:08:31.579968 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:31 crc kubenswrapper[4846]: E1201 00:08:31.580136 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:32 crc kubenswrapper[4846]: I1201 00:08:32.580054 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:32 crc kubenswrapper[4846]: E1201 00:08:32.580320 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:32 crc kubenswrapper[4846]: I1201 00:08:32.580643 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:32 crc kubenswrapper[4846]: E1201 00:08:32.580772 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:32 crc kubenswrapper[4846]: I1201 00:08:32.580974 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:32 crc kubenswrapper[4846]: E1201 00:08:32.581058 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:33 crc kubenswrapper[4846]: I1201 00:08:33.580355 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:33 crc kubenswrapper[4846]: E1201 00:08:33.580582 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rl69z" podUID="219022f7-8f31-4021-9df8-733c23b34602" Dec 01 00:08:34 crc kubenswrapper[4846]: I1201 00:08:34.579556 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:34 crc kubenswrapper[4846]: I1201 00:08:34.579623 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:34 crc kubenswrapper[4846]: I1201 00:08:34.580035 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:34 crc kubenswrapper[4846]: E1201 00:08:34.580171 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 01 00:08:34 crc kubenswrapper[4846]: E1201 00:08:34.579874 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 01 00:08:34 crc kubenswrapper[4846]: E1201 00:08:34.580326 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 01 00:08:35 crc kubenswrapper[4846]: I1201 00:08:35.579761 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:08:35 crc kubenswrapper[4846]: I1201 00:08:35.589562 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 00:08:35 crc kubenswrapper[4846]: I1201 00:08:35.589917 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.579903 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.579991 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.580723 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.582828 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.583011 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.583110 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 00:08:36 crc kubenswrapper[4846]: I1201 00:08:36.583273 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.379947 4846 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.425799 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wlrts"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.426494 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.431590 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.432553 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.432708 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.433075 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h9phn"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.433412 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.433664 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.434009 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.434262 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.436996 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.437913 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.438123 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.450181 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.452037 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.452125 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.453367 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.453656 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.453863 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454151 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454345 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454430 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454500 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454629 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.454894 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.455872 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.456000 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.456634 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.456821 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.456931 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.457101 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.457381 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.457556 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.457746 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29409120-st7sv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.459661 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tq72z"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.460054 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.460342 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.460937 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.466061 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.466798 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.469664 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.469967 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.470096 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.469992 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.470016 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.470539 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.470950 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471047 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471225 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471342 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471429 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471509 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471585 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471656 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471751 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471823 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471899 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.471012 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.478523 4846 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.479028 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.479885 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.480077 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.481124 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.481224 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.481305 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.481407 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.486080 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.486303 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.486445 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.486532 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.486606 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.487810 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.488924 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-67bnn"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.489397 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.489784 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.490202 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.490428 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.493009 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.493427 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.493592 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.497404 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.497473 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.498468 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.499044 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.499219 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.499339 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pfkf6"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.499779 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.512489 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.513664 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-557lc"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.514994 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-images\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515066 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60a3bc94-e170-4e44-a5d3-52d353845365-audit-dir\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515116 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-config\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515202 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-client\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515248 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x5h\" (UniqueName: \"kubernetes.io/projected/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-kube-api-access-84x5h\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515309 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmcs4\" (UniqueName: \"kubernetes.io/projected/60a3bc94-e170-4e44-a5d3-52d353845365-kube-api-access-vmcs4\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515360 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515479 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae52f6b-6c73-4cc2-a074-93a11abb9c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515569 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbtf\" (UniqueName: \"kubernetes.io/projected/aae52f6b-6c73-4cc2-a074-93a11abb9c98-kube-api-access-btbtf\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.515597 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-encryption-config\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.517763 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-config\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.529944 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.530318 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.530389 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.530348 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.531237 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.531605 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532249 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-service-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532308 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-serving-cert\") pod 
\"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532331 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532363 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae52f6b-6c73-4cc2-a074-93a11abb9c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532440 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-audit-policies\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532479 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532515 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-serving-cert\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532539 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532568 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svl7w\" (UniqueName: \"kubernetes.io/projected/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-kube-api-access-svl7w\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.532745 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.533647 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.534140 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.534787 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.534973 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.535140 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.540280 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.540433 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.540950 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.541010 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.541311 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.541349 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.541445 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g2cv9"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.541583 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.542066 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.542181 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.542419 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.542456 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.542608 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.543734 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.544627 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.544930 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.545369 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.550169 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.552932 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wwxb2"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.553575 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.558552 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.559284 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wlrts"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.559362 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.560024 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.560325 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.561932 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.567950 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.568318 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.568537 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.568885 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.570027 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.570933 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.571270 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.571805 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.572034 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.577013 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-st4n7"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.577628 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.577751 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.578090 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.578671 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.579710 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.580172 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.581945 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k87xr"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.582575 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.582774 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.583099 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lgjgk"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.583519 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.583927 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.584019 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.584267 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.584279 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.584427 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.598392 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.603285 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.603496 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.603706 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.603873 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.605124 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.605325 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mn4nj"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.605555 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.607063 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.607118 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.608246 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.611285 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.611893 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.614252 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.633855 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.633994 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634525 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-images\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634562 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-audit\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634585 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634615 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-serving-cert\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634639 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-stats-auth\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634656 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7mfg\" (UniqueName: \"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-kube-api-access-l7mfg\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634671 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlgt\" (UniqueName: \"kubernetes.io/projected/06c91556-9f39-425b-a247-d830eba2643c-kube-api-access-nhlgt\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634721 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-oauth-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634747 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60a3bc94-e170-4e44-a5d3-52d353845365-audit-dir\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634769 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6fj\" (UniqueName: \"kubernetes.io/projected/e342b46d-339c-4903-b2ca-46ee21ba99aa-kube-api-access-5g6fj\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634790 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2cba7d5-652e-4c41-80a0-5477f682832f-trusted-ca\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634811 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5sc\" (UniqueName: \"kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634835 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-config\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: 
\"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634861 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bt8n\" (UniqueName: \"kubernetes.io/projected/90578ce0-0758-4827-bc5b-d1d8ca39148e-kube-api-access-5bt8n\") pod \"downloads-7954f5f757-557lc\" (UID: \"90578ce0-0758-4827-bc5b-d1d8ca39148e\") " pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634886 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgdk\" (UniqueName: \"kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634908 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c91556-9f39-425b-a247-d830eba2643c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634931 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-service-ca\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634953 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.634978 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw729\" (UniqueName: \"kubernetes.io/projected/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-kube-api-access-xw729\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635013 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-client\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635042 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635063 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-audit-dir\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635082 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-trusted-ca\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635107 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84x5h\" (UniqueName: \"kubernetes.io/projected/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-kube-api-access-84x5h\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635129 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t824g\" (UniqueName: \"kubernetes.io/projected/608d1635-c6ea-474a-9a40-99196daa0ae0-kube-api-access-t824g\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635153 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635174 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-default-certificate\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635195 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635218 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-auth-proxy-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc 
kubenswrapper[4846]: I1201 00:08:44.635241 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635268 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmcs4\" (UniqueName: \"kubernetes.io/projected/60a3bc94-e170-4e44-a5d3-52d353845365-kube-api-access-vmcs4\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635287 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-client\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635308 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8768\" (UniqueName: \"kubernetes.io/projected/b5a36407-b124-4956-b91c-3be1a6cfa4b3-kube-api-access-x8768\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635326 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635343 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae52f6b-6c73-4cc2-a074-93a11abb9c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635360 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlm8s\" (UniqueName: \"kubernetes.io/projected/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-kube-api-access-dlm8s\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635376 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635393 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-config\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635410 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635442 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-trusted-ca-bundle\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635465 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635480 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkcx\" (UniqueName: \"kubernetes.io/projected/f2f89446-c3e7-45dc-9170-c954ffa8c445-kube-api-access-4tkcx\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635502 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbtf\" (UniqueName: \"kubernetes.io/projected/aae52f6b-6c73-4cc2-a074-93a11abb9c98-kube-api-access-btbtf\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635519 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-encryption-config\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635535 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-node-pullsecrets\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635553 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635570 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-config\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635586 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks7z\" (UniqueName: \"kubernetes.io/projected/c92de2dd-b856-4062-a258-af0e14f84942-kube-api-access-9ks7z\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635605 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635621 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsc2k\" (UniqueName: \"kubernetes.io/projected/245b49c8-86a4-4b49-83d5-c915905958a3-kube-api-access-fsc2k\") pod \"migrator-59844c95c7-l6szg\" (UID: \"245b49c8-86a4-4b49-83d5-c915905958a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635644 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-service-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635661 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635660 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-images\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635681 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2a047a1e-0e7f-474d-8026-71f3cb40d657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635766 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-serving-cert\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635810 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60a3bc94-e170-4e44-a5d3-52d353845365-audit-dir\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635841 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635861 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635894 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-oauth-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635916 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-config\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635933 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.635975 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c92de2dd-b856-4062-a258-af0e14f84942-machine-approver-tls\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.636011 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.636034 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-encryption-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.636051 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-image-import-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.636468 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-config\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.637384 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.637459 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.637768 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638296 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae52f6b-6c73-4cc2-a074-93a11abb9c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638336 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-audit-policies\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" 
Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638376 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e342b46d-339c-4903-b2ca-46ee21ba99aa-service-ca-bundle\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638407 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638435 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6jrm\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-kube-api-access-b6jrm\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.638650 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639137 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639207 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aae52f6b-6c73-4cc2-a074-93a11abb9c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639287 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639322 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c91556-9f39-425b-a247-d830eba2643c-serving-cert\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639385 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-audit-policies\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639388 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-metrics-certs\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639441 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-serving-cert\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.639607 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.640349 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641657 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641706 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svl7w\" (UniqueName: \"kubernetes.io/projected/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-kube-api-access-svl7w\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641731 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z99k\" (UniqueName: \"kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641754 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-serving-cert\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641776 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641806 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-client\") pod 
\"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641888 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f89446-c3e7-45dc-9170-c954ffa8c445-serving-cert\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641910 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641928 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2cba7d5-652e-4c41-80a0-5477f682832f-metrics-tls\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641946 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641965 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.641983 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbr6\" (UniqueName: \"kubernetes.io/projected/2a047a1e-0e7f-474d-8026-71f3cb40d657-kube-api-access-cmbr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.642004 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.644975 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.645162 4846 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.645743 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.646100 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.647342 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.648401 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.651373 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-serving-cert\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.653963 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.654400 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.657870 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae52f6b-6c73-4cc2-a074-93a11abb9c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.659628 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-config\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.660386 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-service-ca-bundle\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.661340 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-encryption-config\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.662976 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.663660 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/60a3bc94-e170-4e44-a5d3-52d353845365-etcd-client\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.663723 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.675513 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-serving-cert\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.675928 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.680812 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.693338 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.696215 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h9phn"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.696267 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.696384 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.696281 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.697515 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.697591 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6fqvb"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.698053 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.699207 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29409120-st7sv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.699267 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.699304 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.700649 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.702556 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.703775 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.705772 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tq72z"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.707320 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g2cv9"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.708505 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.709023 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.709628 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-67bnn"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.710760 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.714077 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.714117 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.715349 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pfkf6"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.716616 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.717692 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lgjgk"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.719362 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg"] Dec 01 00:08:44 crc kubenswrapper[4846]: 
I1201 00:08:44.722076 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.723660 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.725355 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.726724 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cvl5d"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.728127 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.729318 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.731087 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.733429 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kgx79"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.733861 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.736936 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744089 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744148 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744183 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cp6z\" (UniqueName: \"kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744226 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c91556-9f39-425b-a247-d830eba2643c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: 
\"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744256 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-service-ca\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744289 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-webhook-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744321 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb31493-49be-4d93-9da9-9e32b8ba0b99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744350 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744400 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-audit-dir\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744428 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-trusted-ca\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744470 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2stl\" (UniqueName: \"kubernetes.io/projected/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-kube-api-access-z2stl\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744493 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744514 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-auth-proxy-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744536 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744561 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbzc\" (UniqueName: \"kubernetes.io/projected/aa818711-83ce-4c93-b048-ef40a01fdb04-kube-api-access-8pbzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744584 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-default-certificate\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744624 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-client\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744644 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckzrb\" (UniqueName: \"kubernetes.io/projected/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-kube-api-access-ckzrb\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744665 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744706 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744928 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-images\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744954 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.744981 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745008 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745037 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-node-pullsecrets\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745057 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745080 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745106 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc 
kubenswrapper[4846]: I1201 00:08:44.745131 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/435c27d7-a826-4c55-af67-f0cb995b4447-metrics-tls\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745157 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745183 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745205 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d087dc6-2a31-4cae-88e2-283242b45f38-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745230 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745270 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsc2k\" (UniqueName: \"kubernetes.io/projected/245b49c8-86a4-4b49-83d5-c915905958a3-kube-api-access-fsc2k\") pod \"migrator-59844c95c7-l6szg\" (UID: \"245b49c8-86a4-4b49-83d5-c915905958a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745333 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745368 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2a047a1e-0e7f-474d-8026-71f3cb40d657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745396 4846 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d087dc6-2a31-4cae-88e2-283242b45f38-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745448 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745472 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-oauth-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745503 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745533 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-config\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745563 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745588 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sck4l\" (UniqueName: \"kubernetes.io/projected/0386d75d-f624-4e2d-a804-2a9abaec1f71-kube-api-access-sck4l\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745612 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-encryption-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745633 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/c92de2dd-b856-4062-a258-af0e14f84942-machine-approver-tls\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745653 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745670 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-image-import-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745723 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745945 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745967 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw2m8\" (UniqueName: \"kubernetes.io/projected/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-kube-api-access-kw2m8\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.745992 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6jrm\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-kube-api-access-b6jrm\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746022 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c91556-9f39-425b-a247-d830eba2643c-serving-cert\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746047 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nhw9\" (UniqueName: 
\"kubernetes.io/projected/8c9bfccc-0b80-41f2-84da-512482ad568a-kube-api-access-9nhw9\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746066 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-metrics-certs\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746102 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d087dc6-2a31-4cae-88e2-283242b45f38-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746126 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z99k\" (UniqueName: \"kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746143 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f89446-c3e7-45dc-9170-c954ffa8c445-serving-cert\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746188 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746231 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7766s\" (UniqueName: \"kubernetes.io/projected/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-kube-api-access-7766s\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746255 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbr6\" (UniqueName: \"kubernetes.io/projected/2a047a1e-0e7f-474d-8026-71f3cb40d657-kube-api-access-cmbr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746273 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/8c9bfccc-0b80-41f2-84da-512482ad568a-proxy-tls\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746296 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746331 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2cba7d5-652e-4c41-80a0-5477f682832f-metrics-tls\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746351 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746375 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746394 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746416 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7mfg\" (UniqueName: \"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-kube-api-access-l7mfg\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746439 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a045872-6e5e-4a32-a002-af9b49d7be80-proxy-tls\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746460 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aa818711-83ce-4c93-b048-ef40a01fdb04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746477 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-key\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746520 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-stats-auth\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746542 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2cba7d5-652e-4c41-80a0-5477f682832f-trusted-ca\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746571 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5sc\" (UniqueName: \"kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746601 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv55t\" (UniqueName: \"kubernetes.io/projected/0a045872-6e5e-4a32-a002-af9b49d7be80-kube-api-access-fv55t\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746627 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bt8n\" (UniqueName: \"kubernetes.io/projected/90578ce0-0758-4827-bc5b-d1d8ca39148e-kube-api-access-5bt8n\") pod \"downloads-7954f5f757-557lc\" (UID: \"90578ce0-0758-4827-bc5b-d1d8ca39148e\") " pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746654 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw729\" (UniqueName: \"kubernetes.io/projected/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-kube-api-access-xw729\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746697 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgdk\" (UniqueName: \"kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk\") pod 
\"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746730 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746756 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0386d75d-f624-4e2d-a804-2a9abaec1f71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746787 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trr5\" (UniqueName: \"kubernetes.io/projected/435c27d7-a826-4c55-af67-f0cb995b4447-kube-api-access-7trr5\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746837 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t824g\" (UniqueName: \"kubernetes.io/projected/608d1635-c6ea-474a-9a40-99196daa0ae0-kube-api-access-t824g\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746859 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746931 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746958 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746979 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cddf3e49-1d9d-4ede-9823-4e92a2392585-tmpfs\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.746999 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747022 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8768\" (UniqueName: \"kubernetes.io/projected/b5a36407-b124-4956-b91c-3be1a6cfa4b3-kube-api-access-x8768\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747043 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-serving-cert\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747075 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlm8s\" (UniqueName: \"kubernetes.io/projected/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-kube-api-access-dlm8s\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747107 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-trusted-ca-bundle\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747136 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tkcx\" (UniqueName: \"kubernetes.io/projected/f2f89446-c3e7-45dc-9170-c954ffa8c445-kube-api-access-4tkcx\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747160 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747185 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-config\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747205 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747233 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa818711-83ce-4c93-b048-ef40a01fdb04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747254 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747276 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747296 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-config\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747315 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ks7z\" (UniqueName: \"kubernetes.io/projected/c92de2dd-b856-4062-a258-af0e14f84942-kube-api-access-9ks7z\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747340 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a045872-6e5e-4a32-a002-af9b49d7be80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747360 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc 
kubenswrapper[4846]: I1201 00:08:44.747382 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-cabundle\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747402 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfgq4\" (UniqueName: \"kubernetes.io/projected/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-kube-api-access-lfgq4\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747423 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xmj\" (UniqueName: \"kubernetes.io/projected/cddf3e49-1d9d-4ede-9823-4e92a2392585-kube-api-access-62xmj\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747446 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747467 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747491 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e342b46d-339c-4903-b2ca-46ee21ba99aa-service-ca-bundle\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747509 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747533 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmmd\" (UniqueName: \"kubernetes.io/projected/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-kube-api-access-kqmmd\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc 
kubenswrapper[4846]: I1201 00:08:44.747557 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747584 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-serving-cert\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747603 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747625 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747642 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-client\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747662 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747698 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2bz\" (UniqueName: \"kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747721 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-audit\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747742 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747766 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-oauth-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747812 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.748121 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c91556-9f39-425b-a247-d830eba2643c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749166 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-auth-proxy-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749244 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-serving-cert\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749291 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlgt\" (UniqueName: \"kubernetes.io/projected/06c91556-9f39-425b-a247-d830eba2643c-kube-api-access-nhlgt\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749251 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749360 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-trusted-ca\") pod \"console-operator-58897d9998-67bnn\" (UID: 
\"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.749474 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-audit-dir\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.750628 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.751241 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.751573 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.751819 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6fj\" (UniqueName: \"kubernetes.io/projected/e342b46d-339c-4903-b2ca-46ee21ba99aa-kube-api-access-5g6fj\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.751993 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-default-certificate\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.747715 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-service-ca\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.753944 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/608d1635-c6ea-474a-9a40-99196daa0ae0-node-pullsecrets\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.754878 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2a047a1e-0e7f-474d-8026-71f3cb40d657-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.755092 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e342b46d-339c-4903-b2ca-46ee21ba99aa-service-ca-bundle\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.755294 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.756820 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-image-import-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.756867 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.757383 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.757940 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.758198 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.758793 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a2cba7d5-652e-4c41-80a0-5477f682832f-metrics-tls\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.759239 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c91556-9f39-425b-a247-d830eba2643c-serving-cert\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.759293 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-metrics-certs\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.759407 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-st4n7"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.759547 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-audit\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.759881 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.761329 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.762529 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-encryption-config\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.762614 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c92de2dd-b856-4062-a258-af0e14f84942-machine-approver-tls\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.762106 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-client\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.763099 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c92de2dd-b856-4062-a258-af0e14f84942-config\") pod \"machine-approver-56656f9798-kttfm\" (UID: 
\"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.763397 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f89446-c3e7-45dc-9170-c954ffa8c445-config\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.763476 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-oauth-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.763510 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.763727 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/608d1635-c6ea-474a-9a40-99196daa0ae0-etcd-serving-ca\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.764074 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.764201 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-serving-cert\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.764984 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.765244 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.765294 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2cba7d5-652e-4c41-80a0-5477f682832f-trusted-ca\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.765970 4846 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e342b46d-339c-4903-b2ca-46ee21ba99aa-stats-auth\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.766169 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b5a36407-b124-4956-b91c-3be1a6cfa4b3-console-oauth-config\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.766242 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a36407-b124-4956-b91c-3be1a6cfa4b3-trusted-ca-bundle\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.767851 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/608d1635-c6ea-474a-9a40-99196daa0ae0-serving-cert\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.768247 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.768297 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-ca\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.768636 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-etcd-client\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.768754 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-serving-cert\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.768879 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-config\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.769259 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f89446-c3e7-45dc-9170-c954ffa8c445-serving-cert\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.769417 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.770455 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k87xr"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.771791 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.773490 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.775984 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.777311 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.778704 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.780791 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cvl5d"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.782022 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-557lc"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.783430 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mn4nj"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.784636 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.786781 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6fqvb"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.788604 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.788738 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.789885 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.791370 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nq2mw"] Dec 01 00:08:44 crc 
kubenswrapper[4846]: I1201 00:08:44.792738 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.793368 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nq2mw"] Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.808814 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.828780 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.849209 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852597 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d087dc6-2a31-4cae-88e2-283242b45f38-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852654 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7766s\" (UniqueName: \"kubernetes.io/projected/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-kube-api-access-7766s\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852705 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c9bfccc-0b80-41f2-84da-512482ad568a-proxy-tls\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852732 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852757 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852778 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa818711-83ce-4c93-b048-ef40a01fdb04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852801 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-key\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852831 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a045872-6e5e-4a32-a002-af9b49d7be80-proxy-tls\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852861 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv55t\" (UniqueName: \"kubernetes.io/projected/0a045872-6e5e-4a32-a002-af9b49d7be80-kube-api-access-fv55t\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852937 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.852974 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8z6\" (UniqueName: \"kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853008 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0386d75d-f624-4e2d-a804-2a9abaec1f71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853046 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trr5\" (UniqueName: \"kubernetes.io/projected/435c27d7-a826-4c55-af67-f0cb995b4447-kube-api-access-7trr5\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853090 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 
00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853160 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853194 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cddf3e49-1d9d-4ede-9823-4e92a2392585-tmpfs\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853221 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853250 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-serving-cert\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853274 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853303 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa818711-83ce-4c93-b048-ef40a01fdb04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853320 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853340 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-config\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853359 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a045872-6e5e-4a32-a002-af9b49d7be80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853375 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853397 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xmj\" (UniqueName: \"kubernetes.io/projected/cddf3e49-1d9d-4ede-9823-4e92a2392585-kube-api-access-62xmj\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853422 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-cabundle\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853456 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfgq4\" (UniqueName: \"kubernetes.io/projected/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-kube-api-access-lfgq4\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853479 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmmd\" (UniqueName: \"kubernetes.io/projected/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-kube-api-access-kqmmd\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853544 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853581 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853611 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853644 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853675 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2bz\" (UniqueName: \"kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853754 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853790 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853817 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853842 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cp6z\" (UniqueName: \"kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853869 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-webhook-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853893 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853931 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb31493-49be-4d93-9da9-9e32b8ba0b99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853956 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2stl\" (UniqueName: \"kubernetes.io/projected/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-kube-api-access-z2stl\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.853996 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbzc\" (UniqueName: \"kubernetes.io/projected/aa818711-83ce-4c93-b048-ef40a01fdb04-kube-api-access-8pbzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854263 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cddf3e49-1d9d-4ede-9823-4e92a2392585-tmpfs\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854311 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckzrb\" (UniqueName: \"kubernetes.io/projected/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-kube-api-access-ckzrb\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854355 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854374 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854386 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a045872-6e5e-4a32-a002-af9b49d7be80-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854394 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-images\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854459 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854500 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854542 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854577 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854602 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/435c27d7-a826-4c55-af67-f0cb995b4447-metrics-tls\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854621 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854639 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: 
\"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854676 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854713 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d087dc6-2a31-4cae-88e2-283242b45f38-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854735 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d087dc6-2a31-4cae-88e2-283242b45f38-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854761 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854797 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sck4l\" (UniqueName: \"kubernetes.io/projected/0386d75d-f624-4e2d-a804-2a9abaec1f71-kube-api-access-sck4l\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854822 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854847 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c38a0117-8d8d-4fba-973f-ce27e8fbf920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854883 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854909 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854941 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw2m8\" (UniqueName: \"kubernetes.io/projected/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-kube-api-access-kw2m8\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854963 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nhw9\" (UniqueName: \"kubernetes.io/projected/8c9bfccc-0b80-41f2-84da-512482ad568a-kube-api-access-9nhw9\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854988 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.854996 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-images\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.855635 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c9bfccc-0b80-41f2-84da-512482ad568a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.856471 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c9bfccc-0b80-41f2-84da-512482ad568a-proxy-tls\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.857139 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-webhook-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 
00:08:44.859972 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cddf3e49-1d9d-4ede-9823-4e92a2392585-apiservice-cert\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.869233 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.889272 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.909136 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.915153 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-config\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.929703 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.936332 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-serving-cert\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.948878 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.956633 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.956743 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8z6\" (UniqueName: \"kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.956788 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.957133 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.957225 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c38a0117-8d8d-4fba-973f-ce27e8fbf920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.957288 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.968525 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 00:08:44 crc kubenswrapper[4846]: I1201 00:08:44.989116 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.010065 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.028743 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.036967 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa818711-83ce-4c93-b048-ef40a01fdb04-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.049308 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.054218 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa818711-83ce-4c93-b048-ef40a01fdb04-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.069511 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.089554 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 
00:08:45.097869 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.108810 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.129514 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.137837 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/435c27d7-a826-4c55-af67-f0cb995b4447-metrics-tls\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.148801 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.168961 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.189784 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.209351 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.229416 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.238597 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d087dc6-2a31-4cae-88e2-283242b45f38-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.248803 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.257300 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d087dc6-2a31-4cae-88e2-283242b45f38-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.281608 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.284855 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.289136 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.296634 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a045872-6e5e-4a32-a002-af9b49d7be80-proxy-tls\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.309348 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.317186 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.329746 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.337742 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.349446 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.358061 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.382498 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.387640 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.388400 4846 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.396473 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.413427 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.417515 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.430926 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.437851 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.449789 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.469348 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.478189 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.489839 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.508793 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.529729 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.536062 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 
crc kubenswrapper[4846]: I1201 00:08:45.549517 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.554733 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.568910 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.574300 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.588998 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.607251 4846 request.go:700] Waited for 1.001337053s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0 Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.609027 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.617062 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0386d75d-f624-4e2d-a804-2a9abaec1f71-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.629734 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.649320 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.654783 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.670283 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.689563 4846 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.698001 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.709799 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.729906 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.769538 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.788869 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.794779 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-cabundle\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.808111 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.819452 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-signing-key\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.829859 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.849250 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.853950 4846 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854020 4846 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854058 4846 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854028 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert 
podName:8b3271d0-d2f7-4fd6-81c3-a457840c1ab3 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.354008098 +0000 UTC m=+147.134777172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert") pod "catalog-operator-68c6474976-2j7th" (UID: "8b3271d0-d2f7-4fd6-81c3-a457840c1ab3") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854034 4846 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854145 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert podName:bbb31493-49be-4d93-9da9-9e32b8ba0b99 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.354108452 +0000 UTC m=+147.134877576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert") pod "kube-controller-manager-operator-78b949d7b-xf98s" (UID: "bbb31493-49be-4d93-9da9-9e32b8ba0b99") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854189 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert podName:8b3271d0-d2f7-4fd6-81c3-a457840c1ab3 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.354169274 +0000 UTC m=+147.134938428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert") pod "catalog-operator-68c6474976-2j7th" (UID: "8b3271d0-d2f7-4fd6-81c3-a457840c1ab3") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.854222 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert podName:48ac6c9b-1a8b-4dee-9e40-26f918718b9d nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.354204996 +0000 UTC m=+147.134974180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert") pod "olm-operator-6b444d44fb-v4cjt" (UID: "48ac6c9b-1a8b-4dee-9e40-26f918718b9d") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.855944 4846 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.855993 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config podName:bbb31493-49be-4d93-9da9-9e32b8ba0b99 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.355983333 +0000 UTC m=+147.136752407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config") pod "kube-controller-manager-operator-78b949d7b-xf98s" (UID: "bbb31493-49be-4d93-9da9-9e32b8ba0b99") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.855951 4846 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.856032 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert podName:48ac6c9b-1a8b-4dee-9e40-26f918718b9d nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.356021295 +0000 UTC m=+147.136790469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert") pod "olm-operator-6b444d44fb-v4cjt" (UID: "48ac6c9b-1a8b-4dee-9e40-26f918718b9d") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.856063 4846 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.856089 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics podName:e31602df-d2bc-40de-93be-42600c22a9c1 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.356083117 +0000 UTC m=+147.136852191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics") pod "marketplace-operator-79b997595-6rhj6" (UID: "e31602df-d2bc-40de-93be-42600c22a9c1") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.856129 4846 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.856216 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca podName:e31602df-d2bc-40de-93be-42600c22a9c1 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.356194321 +0000 UTC m=+147.136963445 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca") pod "marketplace-operator-79b997595-6rhj6" (UID: "e31602df-d2bc-40de-93be-42600c22a9c1") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.885414 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbtf\" (UniqueName: \"kubernetes.io/projected/aae52f6b-6c73-4cc2-a074-93a11abb9c98-kube-api-access-btbtf\") pod \"openshift-apiserver-operator-796bbdcf4f-68fpv\" (UID: \"aae52f6b-6c73-4cc2-a074-93a11abb9c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.902718 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84x5h\" (UniqueName: \"kubernetes.io/projected/8d4ee79b-8938-4fbb-b8fc-2d40a9808980-kube-api-access-84x5h\") pod \"authentication-operator-69f744f599-h9phn\" (UID: \"8d4ee79b-8938-4fbb-b8fc-2d40a9808980\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.908919 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.921585 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.928819 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957336 4846 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957395 4846 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957432 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume podName:31064592-6043-412a-82e6-4eb313fa16a3 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.457410804 +0000 UTC m=+147.238179878 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume") pod "collect-profiles-29409120-5nwps" (UID: "31064592-6043-412a-82e6-4eb313fa16a3") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957465 4846 secret.go:188] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957501 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert podName:c38a0117-8d8d-4fba-973f-ce27e8fbf920 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.457493557 +0000 UTC m=+147.238262631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert") pod "kube-apiserver-operator-766d6c64bb-26vlv" (UID: "c38a0117-8d8d-4fba-973f-ce27e8fbf920") : failed to sync secret cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: E1201 00:08:45.957528 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config podName:c38a0117-8d8d-4fba-973f-ce27e8fbf920 nodeName:}" failed. No retries permitted until 2025-12-01 00:08:46.457507178 +0000 UTC m=+147.238276262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config") pod "kube-apiserver-operator-766d6c64bb-26vlv" (UID: "c38a0117-8d8d-4fba-973f-ce27e8fbf920") : failed to sync configmap cache: timed out waiting for the condition Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.968322 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.968984 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.972937 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmcs4\" (UniqueName: \"kubernetes.io/projected/60a3bc94-e170-4e44-a5d3-52d353845365-kube-api-access-vmcs4\") pod \"apiserver-7bbb656c7d-685b8\" (UID: \"60a3bc94-e170-4e44-a5d3-52d353845365\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.982937 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:45 crc kubenswrapper[4846]: I1201 00:08:45.995019 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.004623 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svl7w\" (UniqueName: \"kubernetes.io/projected/6d8ea8a6-45fc-461a-8ce6-f317ff37eac9-kube-api-access-svl7w\") pod \"machine-api-operator-5694c8668f-wlrts\" (UID: \"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.009113 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.037133 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.049087 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.069116 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.089228 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.108832 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.128929 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.149722 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.168378 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.189701 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.210017 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-h9phn"] Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.210826 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.228435 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.249572 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.255102 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.269240 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.291729 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.308865 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.333045 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.349981 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.369670 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.381938 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:46 crc kubenswrapper[4846]: E1201 00:08:46.382171 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:10:48.382137377 +0000 UTC m=+269.162906451 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382423 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382465 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382523 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382608 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382630 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382736 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382754 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382772 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382805 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.382839 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.383928 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.387100 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbb31493-49be-4d93-9da9-9e32b8ba0b99-config\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.387166 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-srv-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.387376 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-profile-collector-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.388277 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.388347 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bbb31493-49be-4d93-9da9-9e32b8ba0b99-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.389502 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.390535 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-srv-cert\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.390581 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" event={"ID":"8d4ee79b-8938-4fbb-b8fc-2d40a9808980","Type":"ContainerStarted","Data":"6e7c65c22260b530fcf640272f0356dce83c6d63f97828bb0884079832eb2258"} Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.391131 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.409089 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.409839 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.410319 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.428889 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.432888 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wlrts"] Dec 01 00:08:46 crc kubenswrapper[4846]: W1201 00:08:46.441463 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8ea8a6_45fc_461a_8ce6_f317ff37eac9.slice/crio-fd58565a1938905559dacdd68f70cd475f4d2e39c5856367a26286d53e1f35a0 WatchSource:0}: Error finding container fd58565a1938905559dacdd68f70cd475f4d2e39c5856367a26286d53e1f35a0: Status 404 returned error can't find the container with id 
fd58565a1938905559dacdd68f70cd475f4d2e39c5856367a26286d53e1f35a0 Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.448340 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.454581 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv"] Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.459150 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8"] Dec 01 00:08:46 crc kubenswrapper[4846]: W1201 00:08:46.463466 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae52f6b_6c73_4cc2_a074_93a11abb9c98.slice/crio-2b251cf6497d1382ad45a230dfb70d12f7754022b83680a571b634d54b2b4bd0 WatchSource:0}: Error finding container 2b251cf6497d1382ad45a230dfb70d12f7754022b83680a571b634d54b2b4bd0: Status 404 returned error can't find the container with id 2b251cf6497d1382ad45a230dfb70d12f7754022b83680a571b634d54b2b4bd0 Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.471854 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.485989 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.486049 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.486192 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.486268 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.486290 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.487266 4846 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.487493 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38a0117-8d8d-4fba-973f-ce27e8fbf920-config\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.489445 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.489794 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.490298 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c38a0117-8d8d-4fba-973f-ce27e8fbf920-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.490975 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.495286 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.502715 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.509646 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.513433 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.529945 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.550100 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.588765 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsc2k\" (UniqueName: \"kubernetes.io/projected/245b49c8-86a4-4b49-83d5-c915905958a3-kube-api-access-fsc2k\") pod \"migrator-59844c95c7-l6szg\" (UID: \"245b49c8-86a4-4b49-83d5-c915905958a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.607191 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6jrm\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-kube-api-access-b6jrm\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.607265 4846 request.go:700] Waited for 1.857107233s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.628403 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlgt\" (UniqueName: \"kubernetes.io/projected/06c91556-9f39-425b-a247-d830eba2643c-kube-api-access-nhlgt\") pod \"openshift-config-operator-7777fb866f-lsjh9\" (UID: \"06c91556-9f39-425b-a247-d830eba2643c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.647514 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6fj\" (UniqueName: \"kubernetes.io/projected/e342b46d-339c-4903-b2ca-46ee21ba99aa-kube-api-access-5g6fj\") pod \"router-default-5444994796-wwxb2\" (UID: \"e342b46d-339c-4903-b2ca-46ee21ba99aa\") " pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.667071 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ks7z\" (UniqueName: \"kubernetes.io/projected/c92de2dd-b856-4062-a258-af0e14f84942-kube-api-access-9ks7z\") pod \"machine-approver-56656f9798-kttfm\" (UID: \"c92de2dd-b856-4062-a258-af0e14f84942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.693058 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 
01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.711025 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7mfg\" (UniqueName: \"kubernetes.io/projected/a2cba7d5-652e-4c41-80a0-5477f682832f-kube-api-access-l7mfg\") pod \"ingress-operator-5b745b69d9-gwwrs\" (UID: \"a2cba7d5-652e-4c41-80a0-5477f682832f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.742175 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8768\" (UniqueName: \"kubernetes.io/projected/b5a36407-b124-4956-b91c-3be1a6cfa4b3-kube-api-access-x8768\") pod \"console-f9d7485db-pfkf6\" (UID: \"b5a36407-b124-4956-b91c-3be1a6cfa4b3\") " pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.747172 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tkcx\" (UniqueName: \"kubernetes.io/projected/f2f89446-c3e7-45dc-9170-c954ffa8c445-kube-api-access-4tkcx\") pod \"console-operator-58897d9998-67bnn\" (UID: \"f2f89446-c3e7-45dc-9170-c954ffa8c445\") " pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.769363 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg5sc\" (UniqueName: \"kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc\") pod \"controller-manager-879f6c89f-hbmbg\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:46 crc kubenswrapper[4846]: W1201 00:08:46.778297 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-e6bfae94afcc1cf6b6e44be27e0872aafe90d40e7008a6ad8a186774b6d2911d WatchSource:0}: Error finding container e6bfae94afcc1cf6b6e44be27e0872aafe90d40e7008a6ad8a186774b6d2911d: Status 404 returned error can't find the container with id e6bfae94afcc1cf6b6e44be27e0872aafe90d40e7008a6ad8a186774b6d2911d Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.788427 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t824g\" (UniqueName: \"kubernetes.io/projected/608d1635-c6ea-474a-9a40-99196daa0ae0-kube-api-access-t824g\") pod \"apiserver-76f77b778f-tq72z\" (UID: \"608d1635-c6ea-474a-9a40-99196daa0ae0\") " pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.796532 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.808530 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.816636 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bt8n\" (UniqueName: \"kubernetes.io/projected/90578ce0-0758-4827-bc5b-d1d8ca39148e-kube-api-access-5bt8n\") pod \"downloads-7954f5f757-557lc\" (UID: \"90578ce0-0758-4827-bc5b-d1d8ca39148e\") " pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.816637 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.828858 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw729\" (UniqueName: \"kubernetes.io/projected/83ef7482-dbe0-429c-8cc6-d3dbef3768fb-kube-api-access-xw729\") pod \"cluster-samples-operator-665b6dd947-nhgcq\" (UID: \"83ef7482-dbe0-429c-8cc6-d3dbef3768fb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.832427 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.839797 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.843087 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgdk\" (UniqueName: \"kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk\") pod \"route-controller-manager-6576b87f9c-pzb9q\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.847915 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.863855 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlm8s\" (UniqueName: \"kubernetes.io/projected/afeb8cab-bf35-4a58-9dbb-22a1d3238b7f-kube-api-access-dlm8s\") pod \"etcd-operator-b45778765-g2cv9\" (UID: \"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.883778 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z99k\" (UniqueName: \"kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k\") pod \"image-pruner-29409120-st7sv\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.894547 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.902297 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/896b2f55-d0aa-4b5c-9d7d-7d814a42fc53-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s4kv4\" (UID: \"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.902661 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.918472 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.923906 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbr6\" (UniqueName: \"kubernetes.io/projected/2a047a1e-0e7f-474d-8026-71f3cb40d657-kube-api-access-cmbr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-qjvv5\" (UID: \"2a047a1e-0e7f-474d-8026-71f3cb40d657\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.932024 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.948347 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.949100 4846 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.965043 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.969098 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.985058 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:46 crc kubenswrapper[4846]: I1201 00:08:46.989434 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.023405 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d087dc6-2a31-4cae-88e2-283242b45f38-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rklqk\" (UID: \"6d087dc6-2a31-4cae-88e2-283242b45f38\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.038622 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.042541 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7766s\" (UniqueName: \"kubernetes.io/projected/3fea9ced-4e4c-4e57-a190-bb3fa41140f3-kube-api-access-7766s\") pod \"service-ca-operator-777779d784-st4n7\" (UID: \"3fea9ced-4e4c-4e57-a190-bb3fa41140f3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.065958 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trr5\" (UniqueName: \"kubernetes.io/projected/435c27d7-a826-4c55-af67-f0cb995b4447-kube-api-access-7trr5\") pod \"dns-operator-744455d44c-k87xr\" (UID: \"435c27d7-a826-4c55-af67-f0cb995b4447\") " pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.084331 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv55t\" (UniqueName: \"kubernetes.io/projected/0a045872-6e5e-4a32-a002-af9b49d7be80-kube-api-access-fv55t\") pod \"machine-config-controller-84d6567774-4mzv7\" (UID: \"0a045872-6e5e-4a32-a002-af9b49d7be80\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.103470 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xmj\" (UniqueName: \"kubernetes.io/projected/cddf3e49-1d9d-4ede-9823-4e92a2392585-kube-api-access-62xmj\") pod \"packageserver-d55dfcdfc-qjd4q\" (UID: \"cddf3e49-1d9d-4ede-9823-4e92a2392585\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.123885 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfgq4\" (UniqueName: \"kubernetes.io/projected/8b3271d0-d2f7-4fd6-81c3-a457840c1ab3-kube-api-access-lfgq4\") pod \"catalog-operator-68c6474976-2j7th\" (UID: \"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.126496 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.141653 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmmd\" (UniqueName: \"kubernetes.io/projected/6d1d01af-6f68-43d0-8b5b-3965fb3e03db-kube-api-access-kqmmd\") pod \"service-ca-9c57cc56f-mn4nj\" (UID: \"6d1d01af-6f68-43d0-8b5b-3965fb3e03db\") " pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.165057 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2bz\" (UniqueName: \"kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz\") pod \"marketplace-operator-79b997595-6rhj6\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.182868 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cp6z\" (UniqueName: \"kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z\") pod \"oauth-openshift-558db77b4-lq6bl\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.202828 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbb31493-49be-4d93-9da9-9e32b8ba0b99-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xf98s\" (UID: \"bbb31493-49be-4d93-9da9-9e32b8ba0b99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.210220 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.222795 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2stl\" (UniqueName: \"kubernetes.io/projected/eaeebea5-554d-4f4a-bcda-e21ccea7f8e0-kube-api-access-z2stl\") pod \"kube-storage-version-migrator-operator-b67b599dd-bls6r\" (UID: \"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.231942 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.239520 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.242542 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbzc\" (UniqueName: \"kubernetes.io/projected/aa818711-83ce-4c93-b048-ef40a01fdb04-kube-api-access-8pbzc\") pod \"openshift-controller-manager-operator-756b6f6bc6-vnq48\" (UID: \"aa818711-83ce-4c93-b048-ef40a01fdb04\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.245998 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.263083 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.263537 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckzrb\" (UniqueName: \"kubernetes.io/projected/3367d1d5-e9eb-44ad-a1a0-27fba7276c74-kube-api-access-ckzrb\") pod \"package-server-manager-789f6589d5-ct4bd\" (UID: \"3367d1d5-e9eb-44ad-a1a0-27fba7276c74\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.273502 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.280153 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.282893 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sck4l\" (UniqueName: \"kubernetes.io/projected/0386d75d-f624-4e2d-a804-2a9abaec1f71-kube-api-access-sck4l\") pod \"multus-admission-controller-857f4d67dd-lgjgk\" (UID: \"0386d75d-f624-4e2d-a804-2a9abaec1f71\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.285646 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.293601 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.299402 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.301442 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nhw9\" (UniqueName: \"kubernetes.io/projected/8c9bfccc-0b80-41f2-84da-512482ad568a-kube-api-access-9nhw9\") pod \"machine-config-operator-74547568cd-cbn6d\" (UID: \"8c9bfccc-0b80-41f2-84da-512482ad568a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.307363 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.318520 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.321941 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw2m8\" (UniqueName: \"kubernetes.io/projected/48ac6c9b-1a8b-4dee-9e40-26f918718b9d-kube-api-access-kw2m8\") pod \"olm-operator-6b444d44fb-v4cjt\" (UID: \"48ac6c9b-1a8b-4dee-9e40-26f918718b9d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.327670 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.336394 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.343037 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.344561 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8z6\" (UniqueName: \"kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6\") pod \"collect-profiles-29409120-5nwps\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.353609 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.362344 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c38a0117-8d8d-4fba-973f-ce27e8fbf920-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-26vlv\" (UID: \"c38a0117-8d8d-4fba-973f-ce27e8fbf920\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.380671 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.397534 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" event={"ID":"aae52f6b-6c73-4cc2-a074-93a11abb9c98","Type":"ContainerStarted","Data":"fd3c00d675b945b0871079cd94ad24ecb8b334c181157fae80b02a2408bfcc4b"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.397598 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" event={"ID":"aae52f6b-6c73-4cc2-a074-93a11abb9c98","Type":"ContainerStarted","Data":"2b251cf6497d1382ad45a230dfb70d12f7754022b83680a571b634d54b2b4bd0"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.400019 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" event={"ID":"8d4ee79b-8938-4fbb-b8fc-2d40a9808980","Type":"ContainerStarted","Data":"16bc426ef90e512460655d58356358dc027c80b53d25296545861ed076d7a6df"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.401101 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e6bfae94afcc1cf6b6e44be27e0872aafe90d40e7008a6ad8a186774b6d2911d"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.402366 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" event={"ID":"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9","Type":"ContainerStarted","Data":"0c5d6a53d30bba2938c172f6d92c30edc70aa6d5d39d8a7110d2ad60b9add728"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.402433 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" event={"ID":"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9","Type":"ContainerStarted","Data":"fd58565a1938905559dacdd68f70cd475f4d2e39c5856367a26286d53e1f35a0"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.403299 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5e41d83e78620e3b32f832389e83eae3e340bcf9e650dca96f8ea5e4e09c2742"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.404143 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1ddf7bc20718bc7b59258a06a1c7a231fd67630b60744f5a6bdd5dbff79aee33"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.405136 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" event={"ID":"60a3bc94-e170-4e44-a5d3-52d353845365","Type":"ContainerStarted","Data":"771b5117ef015e7b6f67ace4380a3b6df3dc59d79e7209542cc8a910279585b8"} Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.595135 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.595669 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.599230 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.599760 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.600325 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: E1201 00:08:47.600740 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.100714883 +0000 UTC m=+148.881484007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.600819 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.600962 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.602629 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 
01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.602683 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwhh\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.604596 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: W1201 00:08:47.618843 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92de2dd_b856_4062_a258_af0e14f84942.slice/crio-b72a5c6065ec8ff6836c05055ae4f27adbdeb9795ce90f4e412d8ce96366310d WatchSource:0}: Error finding container b72a5c6065ec8ff6836c05055ae4f27adbdeb9795ce90f4e412d8ce96366310d: Status 404 returned error can't find the container with id b72a5c6065ec8ff6836c05055ae4f27adbdeb9795ce90f4e412d8ce96366310d Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707039 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707456 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-plugins-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707544 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-certs\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707627 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhk6f\" (UniqueName: \"kubernetes.io/projected/b2b99353-18b2-4200-a85f-6b463bbed78b-kube-api-access-mhk6f\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707657 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzd2v\" (UniqueName: \"kubernetes.io/projected/8f77b310-1408-4832-966f-396fbf5c2aa9-kube-api-access-kzd2v\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707744 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707820 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.707855 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9sx4\" (UniqueName: \"kubernetes.io/projected/02015561-01e9-4645-be51-e5da33595503-kube-api-access-h9sx4\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.708034 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.708132 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvwhh\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: E1201 00:08:47.708322 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.208288057 +0000 UTC m=+148.989057141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.708441 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-socket-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.711303 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.714619 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.718309 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.708764 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-mountpoint-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.737545 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-registration-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.737597 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.737645 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.737720 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-metrics-tls\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.737979 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4bjb\" (UniqueName: \"kubernetes.io/projected/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-kube-api-access-n4bjb\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.738014 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02015561-01e9-4645-be51-e5da33595503-cert\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.738089 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-csi-data-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.738199 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-node-bootstrap-token\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.738312 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.738354 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-config-volume\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.750989 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.752001 4846 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.752668 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvwhh\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.782883 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.837965 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-67bnn"] Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839502 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-node-bootstrap-token\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839543 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-config-volume\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839570 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839598 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-plugins-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839619 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-certs\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839642 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhk6f\" (UniqueName: 
\"kubernetes.io/projected/b2b99353-18b2-4200-a85f-6b463bbed78b-kube-api-access-mhk6f\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839661 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzd2v\" (UniqueName: \"kubernetes.io/projected/8f77b310-1408-4832-966f-396fbf5c2aa9-kube-api-access-kzd2v\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839706 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9sx4\" (UniqueName: \"kubernetes.io/projected/02015561-01e9-4645-be51-e5da33595503-kube-api-access-h9sx4\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839763 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-socket-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839790 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-mountpoint-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839817 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-registration-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839841 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-metrics-tls\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839869 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4bjb\" (UniqueName: \"kubernetes.io/projected/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-kube-api-access-n4bjb\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839889 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02015561-01e9-4645-be51-e5da33595503-cert\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.839908 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-csi-data-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840014 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-csi-data-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840230 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-config-volume\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840251 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-socket-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840285 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-plugins-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840320 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-mountpoint-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.840357 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f77b310-1408-4832-966f-396fbf5c2aa9-registration-dir\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: E1201 00:08:47.840465 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.340450264 +0000 UTC m=+149.121219328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.843899 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02015561-01e9-4645-be51-e5da33595503-cert\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.854667 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-metrics-tls\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.863443 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9sx4\" (UniqueName: \"kubernetes.io/projected/02015561-01e9-4645-be51-e5da33595503-kube-api-access-h9sx4\") pod \"ingress-canary-cvl5d\" (UID: \"02015561-01e9-4645-be51-e5da33595503\") " pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.902526 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzd2v\" (UniqueName: \"kubernetes.io/projected/8f77b310-1408-4832-966f-396fbf5c2aa9-kube-api-access-kzd2v\") pod \"csi-hostpathplugin-nq2mw\" (UID: \"8f77b310-1408-4832-966f-396fbf5c2aa9\") " pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.922557 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4bjb\" (UniqueName: \"kubernetes.io/projected/5c778133-16f9-4ac3-b9ae-6e93df7d1e0c-kube-api-access-n4bjb\") pod \"dns-default-6fqvb\" (UID: \"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c\") " pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.940770 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:47 crc kubenswrapper[4846]: E1201 00:08:47.941108 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.441094436 +0000 UTC m=+149.221863510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.955311 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-certs\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.955559 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b2b99353-18b2-4200-a85f-6b463bbed78b-node-bootstrap-token\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.959598 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhk6f\" (UniqueName: \"kubernetes.io/projected/b2b99353-18b2-4200-a85f-6b463bbed78b-kube-api-access-mhk6f\") pod \"machine-config-server-kgx79\" (UID: \"b2b99353-18b2-4200-a85f-6b463bbed78b\") " pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.995127 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:47 crc kubenswrapper[4846]: I1201 00:08:47.997981 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cvl5d" Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.007499 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kgx79" Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.040959 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.041987 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.042368 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.542347922 +0000 UTC m=+149.323116996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: W1201 00:08:48.048288 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f89446_c3e7_45dc_9170_c954ffa8c445.slice/crio-792e85010ca18e4a64cb5feb527042e6398a548112f4d32c86e66c8b022ced6b WatchSource:0}: Error finding container 792e85010ca18e4a64cb5feb527042e6398a548112f4d32c86e66c8b022ced6b: Status 404 returned error can't find the container with id 792e85010ca18e4a64cb5feb527042e6398a548112f4d32c86e66c8b022ced6b Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.142834 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.143022 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.642980553 +0000 UTC m=+149.423749627 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.143377 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.143704 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.64367611 +0000 UTC m=+149.424445184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.159521 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-h9phn" podStartSLOduration=124.159501314 podStartE2EDuration="2m4.159501314s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:48.129456955 +0000 UTC m=+148.910226029" watchObservedRunningTime="2025-12-01 00:08:48.159501314 +0000 UTC m=+148.940270388" Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.160833 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5"] Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.204221 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pfkf6"] Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.248437 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.248872 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.748850682 +0000 UTC m=+149.529619766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.333237 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg"] Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.351114 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.351464 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.851451167 +0000 UTC m=+149.632220251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.416499 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-67bnn" event={"ID":"f2f89446-c3e7-45dc-9170-c954ffa8c445","Type":"ContainerStarted","Data":"792e85010ca18e4a64cb5feb527042e6398a548112f4d32c86e66c8b022ced6b"} Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.418962 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" event={"ID":"c92de2dd-b856-4062-a258-af0e14f84942","Type":"ContainerStarted","Data":"b72a5c6065ec8ff6836c05055ae4f27adbdeb9795ce90f4e412d8ce96366310d"} Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.453235 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.453743 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:48.953722572 +0000 UTC m=+149.734491646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: W1201 00:08:48.473350 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a047a1e_0e7f_474d_8026_71f3cb40d657.slice/crio-951980927c8b64947bb32ec27281fcefb17f89bba14f7bb572ca972335e70e27 WatchSource:0}: Error finding container 951980927c8b64947bb32ec27281fcefb17f89bba14f7bb572ca972335e70e27: Status 404 returned error can't find the container with id 951980927c8b64947bb32ec27281fcefb17f89bba14f7bb572ca972335e70e27 Dec 01 00:08:48 crc kubenswrapper[4846]: W1201 00:08:48.485000 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5a36407_b124_4956_b91c_3be1a6cfa4b3.slice/crio-a79dd4763e4700f1bc0bc265ed5115b1b6b0f27814e881066f180ade83f01f6e WatchSource:0}: Error finding container a79dd4763e4700f1bc0bc265ed5115b1b6b0f27814e881066f180ade83f01f6e: Status 404 returned error can't find the container with id a79dd4763e4700f1bc0bc265ed5115b1b6b0f27814e881066f180ade83f01f6e Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.557033 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.558309 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.058297921 +0000 UTC m=+149.839066995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.679216 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.681223 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 00:08:49.181200401 +0000 UTC m=+149.961969475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.679489 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7"] Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.683179 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.684322 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.184302957 +0000 UTC m=+149.965072031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.714197 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt"] Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.795341 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.798007 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.297981319 +0000 UTC m=+150.078750393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:48 crc kubenswrapper[4846]: I1201 00:08:48.899302 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:48 crc kubenswrapper[4846]: E1201 00:08:48.899742 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.399725862 +0000 UTC m=+150.180494986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.005034 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.005215 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.505185807 +0000 UTC m=+150.285954871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.006244 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.006534 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.506521636 +0000 UTC m=+150.287290710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.067635 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.084860 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-68fpv" podStartSLOduration=125.08484107 podStartE2EDuration="2m5.08484107s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:49.084671723 +0000 UTC m=+149.865440797" watchObservedRunningTime="2025-12-01 00:08:49.08484107 +0000 UTC m=+149.865610144" Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.107601 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.108225 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.608208378 +0000 UTC m=+150.388977452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.209488 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.224109 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.724089924 +0000 UTC m=+150.504858998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.310460 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.310896 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.810882465 +0000 UTC m=+150.591651539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.412903 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.413279 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:49.913264663 +0000 UTC m=+150.694033737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.425098 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kgx79" event={"ID":"b2b99353-18b2-4200-a85f-6b463bbed78b","Type":"ContainerStarted","Data":"a969e897022dff5468cfbae61d9d8c51055036b3e467d9352f8549f0ede8d296"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.427950 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wwxb2" event={"ID":"e342b46d-339c-4903-b2ca-46ee21ba99aa","Type":"ContainerStarted","Data":"254e21b5238cb90c291705216855b6646f8e282aaba1751ff6505a1d32a3c271"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.429996 4846 generic.go:334] "Generic (PLEG): container finished" podID="60a3bc94-e170-4e44-a5d3-52d353845365" containerID="646dee2b650fe3a3e814e782d2f6162436430431241fc346edacc1b8d5a05239" exitCode=0 Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.430062 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" event={"ID":"60a3bc94-e170-4e44-a5d3-52d353845365","Type":"ContainerDied","Data":"646dee2b650fe3a3e814e782d2f6162436430431241fc346edacc1b8d5a05239"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.431195 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pfkf6" event={"ID":"b5a36407-b124-4956-b91c-3be1a6cfa4b3","Type":"ContainerStarted","Data":"a79dd4763e4700f1bc0bc265ed5115b1b6b0f27814e881066f180ade83f01f6e"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.434417 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" 
event={"ID":"6d8ea8a6-45fc-461a-8ce6-f317ff37eac9","Type":"ContainerStarted","Data":"a72c3b70a6a64b921c46c4fec60b572777a65e1c68152e6695358c7120ad4b7a"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.435110 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" event={"ID":"8c9bfccc-0b80-41f2-84da-512482ad568a","Type":"ContainerStarted","Data":"589ab295cb14fcc51fa13ca0d046b5ea0365662b3a59414e4878b6e32cd48404"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.435793 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" event={"ID":"2a047a1e-0e7f-474d-8026-71f3cb40d657","Type":"ContainerStarted","Data":"951980927c8b64947bb32ec27281fcefb17f89bba14f7bb572ca972335e70e27"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.436492 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" event={"ID":"48ac6c9b-1a8b-4dee-9e40-26f918718b9d","Type":"ContainerStarted","Data":"e34da48118250120088ce0f2bffb95c1309802966e5dc54b07edac9fabec4814"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.437571 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"56c138819121a46212935cc4f062945ac038516605376c04999926f6a181ad6f"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.438880 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"19fba52114317f89fac5b46416f4ca196ed725484266ff00c0f960897ee28d7d"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.440164 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" event={"ID":"0a045872-6e5e-4a32-a002-af9b49d7be80","Type":"ContainerStarted","Data":"bdad398d31e27d966baa11de09704e4b9a0075bdd333642244333ed2faf70c4d"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.440991 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" event={"ID":"245b49c8-86a4-4b49-83d5-c915905958a3","Type":"ContainerStarted","Data":"cf4b6a27abde7457bc29685c287b36506163e618573472ab9f2b1a0680cbe262"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.442103 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dcb31ea59513f9e4e8bcfdda064951c277a50af6b400240f8ea506037557560e"} Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.523246 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.523379 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.023359061 +0000 UTC m=+150.804128135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.523463 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.523872 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.02386097 +0000 UTC m=+150.804630044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.526171 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mn4nj"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.536402 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48"] Dec 01 00:08:49 crc kubenswrapper[4846]: W1201 00:08:49.547451 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa818711_83ce_4c93_b048_ef40a01fdb04.slice/crio-3f77c2cc169ad7beaa69c923e051dabcb416b695b5c1debb725f758a64920405 WatchSource:0}: Error finding container 3f77c2cc169ad7beaa69c923e051dabcb416b695b5c1debb725f758a64920405: Status 404 returned error can't find the container with id 3f77c2cc169ad7beaa69c923e051dabcb416b695b5c1debb725f758a64920405 Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.564702 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-st4n7"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.624491 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.625023 4846 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.124973429 +0000 UTC m=+150.905742503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.648504 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tq72z"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.648544 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.648557 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.719928 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.726143 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.726233 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-557lc"] Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.726756 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.226740154 +0000 UTC m=+151.007509228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.730910 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.734123 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.737143 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-g2cv9"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.740364 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.775346 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.829621 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.830536 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.330514054 +0000 UTC m=+151.111283128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.832288 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nq2mw"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.859772 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.868481 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.876157 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29409120-st7sv"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.880941 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.888292 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6fqvb"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.903962 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lgjgk"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.931237 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:49 crc kubenswrapper[4846]: E1201 00:08:49.931667 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.431637044 +0000 UTC m=+151.212406118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.981933 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.984339 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps"] Dec 01 00:08:49 crc kubenswrapper[4846]: I1201 00:08:49.988926 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4"] Dec 01 00:08:49 crc kubenswrapper[4846]: W1201 00:08:49.999044 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f77b310_1408_4832_966f_396fbf5c2aa9.slice/crio-a641af27e6e98bec17317bdae22d761d3aab25fd0e15eb86c2013eff9aa61aa2 WatchSource:0}: Error finding container a641af27e6e98bec17317bdae22d761d3aab25fd0e15eb86c2013eff9aa61aa2: Status 404 returned error can't find the container with id a641af27e6e98bec17317bdae22d761d3aab25fd0e15eb86c2013eff9aa61aa2 Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.001674 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r"] Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.010708 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c778133_16f9_4ac3_b9ae_6e93df7d1e0c.slice/crio-6ff3e679e1b86087c7120643eeb1d2ea554483b8cc38b635e8312623e9761b83 WatchSource:0}: Error finding container 6ff3e679e1b86087c7120643eeb1d2ea554483b8cc38b635e8312623e9761b83: Status 404 returned error can't find the container with id 6ff3e679e1b86087c7120643eeb1d2ea554483b8cc38b635e8312623e9761b83 Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.011124 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode969ab94_0cbc_487b_944e_b8b18e633127.slice/crio-d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14 WatchSource:0}: Error finding container d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14: Status 404 returned error can't find the container with id d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14 Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.016480 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbb31493_49be_4d93_9da9_9e32b8ba0b99.slice/crio-23254386608cf68fe82701318e2e45a88084e341673dbb817d47973dd2330712 WatchSource:0}: Error finding container 23254386608cf68fe82701318e2e45a88084e341673dbb817d47973dd2330712: Status 404 returned error can't find the container with id 23254386608cf68fe82701318e2e45a88084e341673dbb817d47973dd2330712 Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.018815 4846 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0386d75d_f624_4e2d_a804_2a9abaec1f71.slice/crio-dba882368d88df4a4d2a5fc9e35fd2ec6be74febb2aee603c2cdf719d5b2401e WatchSource:0}: Error finding container dba882368d88df4a4d2a5fc9e35fd2ec6be74febb2aee603c2cdf719d5b2401e: Status 404 returned error can't find the container with id dba882368d88df4a4d2a5fc9e35fd2ec6be74febb2aee603c2cdf719d5b2401e Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.020002 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q"] Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.023468 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31064592_6043_412a_82e6_4eb313fa16a3.slice/crio-c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344 WatchSource:0}: Error finding container c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344: Status 404 returned error can't find the container with id c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344 Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.028387 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod896b2f55_d0aa_4b5c_9d7d_7d814a42fc53.slice/crio-3d4b93d51cb0666defbcd364ecdd7959bcfbb88762c954e76eded3941b8e02d5 WatchSource:0}: Error finding container 3d4b93d51cb0666defbcd364ecdd7959bcfbb88762c954e76eded3941b8e02d5: Status 404 returned error can't find the container with id 3d4b93d51cb0666defbcd364ecdd7959bcfbb88762c954e76eded3941b8e02d5 Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.030005 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaeebea5_554d_4f4a_bcda_e21ccea7f8e0.slice/crio-891069ac358fa0c35a7e1dbc1a397d88ace372c81c2a23fe5187d994bf222939 WatchSource:0}: Error finding container 891069ac358fa0c35a7e1dbc1a397d88ace372c81c2a23fe5187d994bf222939: Status 404 returned error can't find the container with id 891069ac358fa0c35a7e1dbc1a397d88ace372c81c2a23fe5187d994bf222939 Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.030386 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.032601 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.032737 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.532723253 +0000 UTC m=+151.313492327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.033283 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.533275094 +0000 UTC m=+151.314044168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.032989 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: W1201 00:08:50.034516 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcddf3e49_1d9d_4ede_9823_4e92a2392585.slice/crio-f2861c29f22b3ee3a1baed5950f368831366426950c2d71b93ced0343eb5261d WatchSource:0}: Error finding container f2861c29f22b3ee3a1baed5950f368831366426950c2d71b93ced0343eb5261d: Status 404 returned error can't find the container with id f2861c29f22b3ee3a1baed5950f368831366426950c2d71b93ced0343eb5261d Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.043926 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs"] Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.059059 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cvl5d"] Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.081976 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-k87xr"] Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.138450 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.138763 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 00:08:50.638745728 +0000 UTC m=+151.419514802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.138870 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.139146 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.639136683 +0000 UTC m=+151.419905757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.240131 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.240931 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.740912097 +0000 UTC m=+151.521681171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.343592 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.344050 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.844035563 +0000 UTC m=+151.624804637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.444410 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.444585 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.944554821 +0000 UTC m=+151.725323895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.444942 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.445250 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:50.945240246 +0000 UTC m=+151.726009320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.457988 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" event={"ID":"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3","Type":"ContainerStarted","Data":"6d9e3484c8ef3d8a9fd224df64bd512f1dc60672e13fae7366ac7353c4dcbd93"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.459495 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" event={"ID":"3c17b590-902c-4863-823c-865652c475c0","Type":"ContainerStarted","Data":"72a65eb362cf2a6c516ecb1cc702f1d5f393de34d6ed6b59352b4ae4a24425bb"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.459526 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" event={"ID":"3c17b590-902c-4863-823c-865652c475c0","Type":"ContainerStarted","Data":"4f50c55e103bb39a94a2820d57060dc4b8d4a46ec7c60f130f0a11e9fdf2eae9"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.460489 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.462267 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" event={"ID":"3fea9ced-4e4c-4e57-a190-bb3fa41140f3","Type":"ContainerStarted","Data":"714ecffbaa05d8fb9b249751f9837ac803d39cb630fe43b914135a37ec8b1250"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.462304 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" 
event={"ID":"3fea9ced-4e4c-4e57-a190-bb3fa41140f3","Type":"ContainerStarted","Data":"53587eef7b6e7afdcdae7b34005cfb8dd6e58f4af0961c44447079e5f938baa6"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.464849 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" event={"ID":"8c9bfccc-0b80-41f2-84da-512482ad568a","Type":"ContainerStarted","Data":"d145c3f951930cac169056339867f82d2ecae0f34e9107b7071cf8beaff06ca1"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.465323 4846 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hbmbg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.465359 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" podUID="3c17b590-902c-4863-823c-865652c475c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.465864 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" event={"ID":"31064592-6043-412a-82e6-4eb313fa16a3","Type":"ContainerStarted","Data":"c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.468139 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" event={"ID":"6d1d01af-6f68-43d0-8b5b-3965fb3e03db","Type":"ContainerStarted","Data":"ed3df8e6d136bc69c71125b8343fc03c54240c2d3485286ac2e6c3fd107995c8"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.468174 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" event={"ID":"6d1d01af-6f68-43d0-8b5b-3965fb3e03db","Type":"ContainerStarted","Data":"adb7176653746a36f21dd93e5dfacc0ba2b82924850db127f124c2f2bcdf9164"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.470223 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" event={"ID":"c92de2dd-b856-4062-a258-af0e14f84942","Type":"ContainerStarted","Data":"be2adaa240114b4c03b9ac2f3ef6a827b8f464d109ea75af145af47467281355"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.475210 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" event={"ID":"c38a0117-8d8d-4fba-973f-ce27e8fbf920","Type":"ContainerStarted","Data":"0438f8852d20a6a7771b14d466940e47e6c0ee97aeaa975685b1810f66da8761"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.476210 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" event={"ID":"06c91556-9f39-425b-a247-d830eba2643c","Type":"ContainerStarted","Data":"653c8e310816e62bad05b3a187efe20f35a90a83eba4d0e41446f8dc482f53f1"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.477605 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" 
event={"ID":"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0","Type":"ContainerStarted","Data":"891069ac358fa0c35a7e1dbc1a397d88ace372c81c2a23fe5187d994bf222939"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.478730 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pfkf6" event={"ID":"b5a36407-b124-4956-b91c-3be1a6cfa4b3","Type":"ContainerStarted","Data":"055566d8f9c08b593c943f1567a5217a2d5f5d8bc018a65b63c505342797169e"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.479659 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" event={"ID":"a2cba7d5-652e-4c41-80a0-5477f682832f","Type":"ContainerStarted","Data":"28dfcb781f1e90043b3ad63e315089af260075327d6b83f895f5f5876fd70528"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.484868 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" event={"ID":"0386d75d-f624-4e2d-a804-2a9abaec1f71","Type":"ContainerStarted","Data":"dba882368d88df4a4d2a5fc9e35fd2ec6be74febb2aee603c2cdf719d5b2401e"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.486516 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" event={"ID":"3367d1d5-e9eb-44ad-a1a0-27fba7276c74","Type":"ContainerStarted","Data":"53ca395a7ae7d96a2b475f203d05a2c634e6c7ebbf8cace2a02e0da72b59ffc3"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.491193 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" event={"ID":"aa818711-83ce-4c93-b048-ef40a01fdb04","Type":"ContainerStarted","Data":"52dee5af04b71c996bfcf6fa6b6adaf7e08f8c1611dbac7b405a3dd1a91316f3"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.491255 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" event={"ID":"aa818711-83ce-4c93-b048-ef40a01fdb04","Type":"ContainerStarted","Data":"3f77c2cc169ad7beaa69c923e051dabcb416b695b5c1debb725f758a64920405"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.498535 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-67bnn" event={"ID":"f2f89446-c3e7-45dc-9170-c954ffa8c445","Type":"ContainerStarted","Data":"84ec4c3bc907ce8812a01d283577f19e1c419c75fc799fca0c327ecefb92a67a"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.499456 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.500622 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" event={"ID":"83ef7482-dbe0-429c-8cc6-d3dbef3768fb","Type":"ContainerStarted","Data":"f84909386270b1a1fd479b4cf53cfb5145beebd01a6086cde946742ecc2ef4d7"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.502340 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29409120-st7sv" event={"ID":"e969ab94-0cbc-487b-944e-b8b18e633127","Type":"ContainerStarted","Data":"d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.504785 4846 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" event={"ID":"2a047a1e-0e7f-474d-8026-71f3cb40d657","Type":"ContainerStarted","Data":"840d01a43ab9ebaeba11a111d0af88ed59b56a45581cea71043d82d885c9b0af"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.506791 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pfkf6" podStartSLOduration=126.506780109 podStartE2EDuration="2m6.506780109s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.505755501 +0000 UTC m=+151.286524585" watchObservedRunningTime="2025-12-01 00:08:50.506780109 +0000 UTC m=+151.287549183" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.507295 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wwxb2" event={"ID":"e342b46d-339c-4903-b2ca-46ee21ba99aa","Type":"ContainerStarted","Data":"bf1df788d57bf301dfcc555d984e92336a0954012861b3994c972bf6f3060f1b"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.508260 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" podStartSLOduration=126.508248835 podStartE2EDuration="2m6.508248835s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.484512102 +0000 UTC m=+151.265281176" watchObservedRunningTime="2025-12-01 00:08:50.508248835 +0000 UTC m=+151.289017909" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.510756 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" event={"ID":"608d1635-c6ea-474a-9a40-99196daa0ae0","Type":"ContainerStarted","Data":"b13f643ff6ad2314742a85768a742739bc3a3d218f6a2fc6b4991a31a19dbbf0"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.512310 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" event={"ID":"48ac6c9b-1a8b-4dee-9e40-26f918718b9d","Type":"ContainerStarted","Data":"002c4524cdabbd864d1a81f1a58557bf11f12d71ccb91a8d28d3f9780c72af0b"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.513407 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.514149 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" event={"ID":"8f77b310-1408-4832-966f-396fbf5c2aa9","Type":"ContainerStarted","Data":"a641af27e6e98bec17317bdae22d761d3aab25fd0e15eb86c2013eff9aa61aa2"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.515182 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kgx79" event={"ID":"b2b99353-18b2-4200-a85f-6b463bbed78b","Type":"ContainerStarted","Data":"7d5459984f189cad175940e0f62e1c549c8960f20819457580133268e678b72e"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.516710 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" 
event={"ID":"245b49c8-86a4-4b49-83d5-c915905958a3","Type":"ContainerStarted","Data":"60981807d846476f943b506e9a9d8eb6c713364513dfb995c755730596f425d0"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.519729 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cvl5d" event={"ID":"02015561-01e9-4645-be51-e5da33595503","Type":"ContainerStarted","Data":"bd3705f4648677a6b4288dee866957997734d6ed2c46ee1ddc3ef2ff78e12130"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.520663 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerStarted","Data":"1ce7e3feaf70425ba65d9e6e0918ef409433044bbb976b2310b07ed91b36c21a"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.521501 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" event={"ID":"cddf3e49-1d9d-4ede-9823-4e92a2392585","Type":"ContainerStarted","Data":"f2861c29f22b3ee3a1baed5950f368831366426950c2d71b93ced0343eb5261d"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.522287 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" event={"ID":"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136","Type":"ContainerStarted","Data":"2cc941032f0e90430de5ce4b582ee9ab2c9eb418a09ad3316f47549a162491a1"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.522954 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" event={"ID":"b7519ac9-b09f-4169-bf4d-b6ec5849661c","Type":"ContainerStarted","Data":"8496e1e44adfcc127e9f739b88974bd3c9c6ceac432959149b0db9af97ef2b33"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.523217 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-st4n7" podStartSLOduration=126.523203647 podStartE2EDuration="2m6.523203647s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.520765624 +0000 UTC m=+151.301534708" watchObservedRunningTime="2025-12-01 00:08:50.523203647 +0000 UTC m=+151.303972731" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.528348 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" event={"ID":"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f","Type":"ContainerStarted","Data":"72aa4a6bdaaff301eaadaba344af6d207e7d16c5d24a4c3ccc0207a81ef1fadc"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.529250 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" event={"ID":"bbb31493-49be-4d93-9da9-9e32b8ba0b99","Type":"ContainerStarted","Data":"23254386608cf68fe82701318e2e45a88084e341673dbb817d47973dd2330712"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.529941 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" event={"ID":"435c27d7-a826-4c55-af67-f0cb995b4447","Type":"ContainerStarted","Data":"1dda661d5ca17b8787671454ee5a5b85e07bcfbb76d67d49a6972c2c323f99df"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.531201 4846 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" event={"ID":"0a045872-6e5e-4a32-a002-af9b49d7be80","Type":"ContainerStarted","Data":"6a7cda30129808c90bb4062899abda36b23274e7244583e545b963e9710f3e11"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.531991 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" event={"ID":"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53","Type":"ContainerStarted","Data":"3d4b93d51cb0666defbcd364ecdd7959bcfbb88762c954e76eded3941b8e02d5"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.532960 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" event={"ID":"6d087dc6-2a31-4cae-88e2-283242b45f38","Type":"ContainerStarted","Data":"41a10757abc1f4485024c53df14f14f3d66e46729df6ba7aec5574d7ea16c47b"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.536747 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-557lc" event={"ID":"90578ce0-0758-4827-bc5b-d1d8ca39148e","Type":"ContainerStarted","Data":"ee83bc01b5e419fb74b3932d5843f482bf4f7b3f5ab71ff2e1f84630ea3f8a7a"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.539249 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mn4nj" podStartSLOduration=125.539201618 podStartE2EDuration="2m5.539201618s" podCreationTimestamp="2025-12-01 00:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.534336335 +0000 UTC m=+151.315105409" watchObservedRunningTime="2025-12-01 00:08:50.539201618 +0000 UTC m=+151.319970682" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.542813 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6fqvb" event={"ID":"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c","Type":"ContainerStarted","Data":"6ff3e679e1b86087c7120643eeb1d2ea554483b8cc38b635e8312623e9761b83"} Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.542858 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.545913 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.546084 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.046056055 +0000 UTC m=+151.826825129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.546131 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.546459 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.04644708 +0000 UTC m=+151.827216154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.554245 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-67bnn" podStartSLOduration=126.554223302 podStartE2EDuration="2m6.554223302s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.550835505 +0000 UTC m=+151.331604609" watchObservedRunningTime="2025-12-01 00:08:50.554223302 +0000 UTC m=+151.334992376" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.565869 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" podStartSLOduration=126.565848339 podStartE2EDuration="2m6.565848339s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.56375458 +0000 UTC m=+151.344523664" watchObservedRunningTime="2025-12-01 00:08:50.565848339 +0000 UTC m=+151.346617413" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.581026 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kgx79" podStartSLOduration=6.580996198 podStartE2EDuration="6.580996198s" podCreationTimestamp="2025-12-01 00:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.578270286 +0000 UTC m=+151.359039360" watchObservedRunningTime="2025-12-01 00:08:50.580996198 +0000 UTC m=+151.361765272" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 
00:08:50.603214 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qjvv5" podStartSLOduration=126.603188932 podStartE2EDuration="2m6.603188932s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.590108031 +0000 UTC m=+151.370877135" watchObservedRunningTime="2025-12-01 00:08:50.603188932 +0000 UTC m=+151.383958006" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.614887 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wwxb2" podStartSLOduration=126.614871692 podStartE2EDuration="2m6.614871692s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.613404586 +0000 UTC m=+151.394173660" watchObservedRunningTime="2025-12-01 00:08:50.614871692 +0000 UTC m=+151.395640766" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.632052 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-vnq48" podStartSLOduration=126.632026847 podStartE2EDuration="2m6.632026847s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.626311612 +0000 UTC m=+151.407080686" watchObservedRunningTime="2025-12-01 00:08:50.632026847 +0000 UTC m=+151.412795931" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.647338 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.648872 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.148852509 +0000 UTC m=+151.929621603 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.697876 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v4cjt" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.715362 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wlrts" podStartSLOduration=126.715340147 podStartE2EDuration="2m6.715340147s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:50.695669238 +0000 UTC m=+151.476438312" watchObservedRunningTime="2025-12-01 00:08:50.715340147 +0000 UTC m=+151.496109221" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.752319 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.752718 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.252676051 +0000 UTC m=+152.033445125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.854016 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.864888 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.364854157 +0000 UTC m=+152.145623321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.918654 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.930736 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:50 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:50 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:50 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.930785 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:50 crc kubenswrapper[4846]: I1201 00:08:50.956341 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:50 crc kubenswrapper[4846]: E1201 00:08:50.956829 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.456804182 +0000 UTC m=+152.237573256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.059296 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.059469 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.559444519 +0000 UTC m=+152.340213583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.059734 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.060040 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.560031342 +0000 UTC m=+152.340800416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.134740 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-67bnn" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.161540 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.162074 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.662054625 +0000 UTC m=+152.442823699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.270522 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.270940 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.770927087 +0000 UTC m=+152.551696161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.378088 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.378384 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.878370665 +0000 UTC m=+152.659139739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.479487 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.479944 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:51.979928102 +0000 UTC m=+152.760697176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.583192 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.583468 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.083453913 +0000 UTC m=+152.864222987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.613454 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29409120-st7sv" event={"ID":"e969ab94-0cbc-487b-944e-b8b18e633127","Type":"ContainerStarted","Data":"c383a4ff8642e5d366adadbf5477dc45d67da9eed884998b7b38fca0de0f8ffc"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.618191 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" event={"ID":"8b3271d0-d2f7-4fd6-81c3-a457840c1ab3","Type":"ContainerStarted","Data":"4d076fcc95ef3a86d65efdb1c9d7e158f4a79540ec2d8b771d0168e59a7c9ba9"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.618703 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.635504 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.636742 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" event={"ID":"3367d1d5-e9eb-44ad-a1a0-27fba7276c74","Type":"ContainerStarted","Data":"17cf057e7c0f978e323b5da1f176912f235762b8dd2201ab02f26e99875c07ad"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.644223 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29409120-st7sv" podStartSLOduration=127.644205886 podStartE2EDuration="2m7.644205886s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:51.641181812 +0000 UTC m=+152.421950886" watchObservedRunningTime="2025-12-01 00:08:51.644205886 +0000 UTC m=+152.424974960" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.647420 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-557lc" event={"ID":"90578ce0-0758-4827-bc5b-d1d8ca39148e","Type":"ContainerStarted","Data":"5b1f4a2f9a3113819d2b33d1602b06f61c36a33dfd4f6348961a3b8eb1717f3b"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.648224 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.660207 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2j7th" podStartSLOduration=127.660190516 podStartE2EDuration="2m7.660190516s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:51.657272396 +0000 UTC m=+152.438041470" watchObservedRunningTime="2025-12-01 
00:08:51.660190516 +0000 UTC m=+152.440959590" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.661483 4846 patch_prober.go:28] interesting pod/downloads-7954f5f757-557lc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.661544 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-557lc" podUID="90578ce0-0758-4827-bc5b-d1d8ca39148e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.662172 4846 generic.go:334] "Generic (PLEG): container finished" podID="06c91556-9f39-425b-a247-d830eba2643c" containerID="7c25962f5bdcb341e17700227fa435992819dd2c16e06fd2f389768a76e78448" exitCode=0 Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.663106 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" event={"ID":"06c91556-9f39-425b-a247-d830eba2643c","Type":"ContainerDied","Data":"7c25962f5bdcb341e17700227fa435992819dd2c16e06fd2f389768a76e78448"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.676001 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-557lc" podStartSLOduration=127.67598327 podStartE2EDuration="2m7.67598327s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:51.674858358 +0000 UTC m=+152.455627442" watchObservedRunningTime="2025-12-01 00:08:51.67598327 +0000 UTC m=+152.456752344" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.677627 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" event={"ID":"8c9bfccc-0b80-41f2-84da-512482ad568a","Type":"ContainerStarted","Data":"31ead620bca0537b0e9082ec8c8bb38584e74fc14d54a1d13121cd10befcd68d"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.685243 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" event={"ID":"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136","Type":"ContainerStarted","Data":"f7be5448c001f0b68e800dac504e0ee6851d21447f2919b418898317b660548c"} Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.693441 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.695783 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.195747893 +0000 UTC m=+152.976517037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.707804 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cbn6d" podStartSLOduration=127.707783505 podStartE2EDuration="2m7.707783505s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:51.70499954 +0000 UTC m=+152.485768624" watchObservedRunningTime="2025-12-01 00:08:51.707783505 +0000 UTC m=+152.488552579" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.782850 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.799634 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.800548 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.30051364 +0000 UTC m=+153.081282724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.900826 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:51 crc kubenswrapper[4846]: E1201 00:08:51.901156 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.401144432 +0000 UTC m=+153.181913506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.925727 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:51 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:51 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:51 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:51 crc kubenswrapper[4846]: I1201 00:08:51.926058 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.001546 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.001746 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.501720412 +0000 UTC m=+153.282489496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.001820 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.002231 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.50220835 +0000 UTC m=+153.282977454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.102528 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.102739 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.602711077 +0000 UTC m=+153.383480161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.102846 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.103174 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.603163774 +0000 UTC m=+153.383932848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.203592 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.203773 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.703746074 +0000 UTC m=+153.484515148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.304752 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.305167 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.805148015 +0000 UTC m=+153.585917099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.405805 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.406034 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.906001346 +0000 UTC m=+153.686770430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.406371 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.406661 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:52.90664802 +0000 UTC m=+153.687417094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.507169 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.507572 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.007552302 +0000 UTC m=+153.788321376 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.609031 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.609388 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.109372708 +0000 UTC m=+153.890141782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.700137 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" event={"ID":"bbb31493-49be-4d93-9da9-9e32b8ba0b99","Type":"ContainerStarted","Data":"e62aca763239319d0d0ec6ef85ea573440e62bb4b4e3e3419ebaff5644e39d49"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.701440 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" event={"ID":"eaeebea5-554d-4f4a-bcda-e21ccea7f8e0","Type":"ContainerStarted","Data":"6930ace42436d54f733ea4a67934a21c15c3c8506ccb8d45d9606089a254766b"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.705836 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" event={"ID":"6d087dc6-2a31-4cae-88e2-283242b45f38","Type":"ContainerStarted","Data":"46ac9df9fb4a80f8888e044d929ade1bfce1e6f8c357634e76780b268b947d61"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.707323 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" event={"ID":"c92de2dd-b856-4062-a258-af0e14f84942","Type":"ContainerStarted","Data":"7a9764a9f2cbe77f37e4f8757e89c47d7e7fb21a3d8d64b01031288332d2deb1"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.708714 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" event={"ID":"31064592-6043-412a-82e6-4eb313fa16a3","Type":"ContainerStarted","Data":"20b1175a3be0d6537b5c240b4f24f7956698bfafa9bf47a7c785e2b8368d96a7"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.709449 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.709819 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.209805604 +0000 UTC m=+153.990574678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.713237 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" event={"ID":"60a3bc94-e170-4e44-a5d3-52d353845365","Type":"ContainerStarted","Data":"edd0aaf25b4bd51d6b1a6acf967a4dfb9f9e17d7a8de342609350b0daf2f406a"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.715407 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" event={"ID":"c38a0117-8d8d-4fba-973f-ce27e8fbf920","Type":"ContainerStarted","Data":"5d6dbaa0e1bff9e160198c40f20f8ab0d476739262d014e2626048e0b7495932"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.717411 4846 generic.go:334] "Generic (PLEG): container finished" podID="608d1635-c6ea-474a-9a40-99196daa0ae0" containerID="aaa575ada14cd8c6ad291fa04ee13b5335fe062419f5a60c87b0aad562ecd079" exitCode=0 Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.717479 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" event={"ID":"608d1635-c6ea-474a-9a40-99196daa0ae0","Type":"ContainerDied","Data":"aaa575ada14cd8c6ad291fa04ee13b5335fe062419f5a60c87b0aad562ecd079"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.719221 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" event={"ID":"896b2f55-d0aa-4b5c-9d7d-7d814a42fc53","Type":"ContainerStarted","Data":"4ed5359d21fc0ef8941260f270aaa019e0ca76c477c44fef27af751c2e3684b2"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.722188 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" event={"ID":"a2cba7d5-652e-4c41-80a0-5477f682832f","Type":"ContainerStarted","Data":"9e2261aba3f2c2ad024d339171463fa2fcb79fa7c999b2d179cfa6341b4b9d1b"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.724240 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" event={"ID":"245b49c8-86a4-4b49-83d5-c915905958a3","Type":"ContainerStarted","Data":"4d944611a28a395833f94e3b174de66f00a01b6eede42872da2c6a5040dd7b05"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.725644 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" event={"ID":"afeb8cab-bf35-4a58-9dbb-22a1d3238b7f","Type":"ContainerStarted","Data":"214529fd232e1eb48cc9926b0a02cba35c9415a6d91945365049cd5cd9f7bf44"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.727401 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" event={"ID":"3367d1d5-e9eb-44ad-a1a0-27fba7276c74","Type":"ContainerStarted","Data":"1d9f36f89261127e7ebc2083bb762a9d9590da24631ac7978cc871445790ef33"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.728725 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" event={"ID":"cddf3e49-1d9d-4ede-9823-4e92a2392585","Type":"ContainerStarted","Data":"fdac2510aacdb9f9fd49006dc7957c8788d777af1c8d881453ac9f19072d8cfd"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.730055 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xf98s" podStartSLOduration=128.730034513 podStartE2EDuration="2m8.730034513s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:52.715742456 +0000 UTC m=+153.496511540" watchObservedRunningTime="2025-12-01 00:08:52.730034513 +0000 UTC m=+153.510803587" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.755587 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" event={"ID":"435c27d7-a826-4c55-af67-f0cb995b4447","Type":"ContainerStarted","Data":"f8a4b319b109c800b789bf297c1809ed5165072e43ef2aaaf9f9baf2de3b6af7"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.775905 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerStarted","Data":"803063f3b8156c1593a584d6d9c6be546f1be1f9df7b37d498b0b1c6270ccd38"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.777348 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.789916 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" event={"ID":"0a045872-6e5e-4a32-a002-af9b49d7be80","Type":"ContainerStarted","Data":"0e26c85fe28281d2f835b64c85ad0e7d210020a8d5d7e058c500d3310a633dce"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.804811 4846 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6rhj6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.804887 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.809483 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" event={"ID":"b7519ac9-b09f-4169-bf4d-b6ec5849661c","Type":"ContainerStarted","Data":"54460d4e253bb966291bdde71aff3717ab198e7fe1019f2e2c4a4aba519f3a1f"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.810259 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.814621 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.838286 4846 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lq6bl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.838343 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.838530 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.338513091 +0000 UTC m=+154.119282255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.868981 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" event={"ID":"0386d75d-f624-4e2d-a804-2a9abaec1f71","Type":"ContainerStarted","Data":"febbcb08826808bac37d99074ebcd91ff11f841cb715fedb02fbb58e021a6d2a"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.870200 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rklqk" podStartSLOduration=128.870189201 podStartE2EDuration="2m8.870189201s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:52.850158147 +0000 UTC m=+153.630927221" watchObservedRunningTime="2025-12-01 00:08:52.870189201 +0000 UTC m=+153.650958275" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.912808 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cvl5d" event={"ID":"02015561-01e9-4645-be51-e5da33595503","Type":"ContainerStarted","Data":"0e8d38622c75793eaaf90baf7f7d8287921e56e0c1943f950391ab741fb13040"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.917611 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:52 crc kubenswrapper[4846]: E1201 00:08:52.933218 4846 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.433195099 +0000 UTC m=+154.213964173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.938263 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:52 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:52 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:52 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.938316 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.943792 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6fqvb" event={"ID":"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c","Type":"ContainerStarted","Data":"4677056ab993189682c571c0be065fbc9758a209eb0766a1f23e4817190749f1"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.945821 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bls6r" podStartSLOduration=128.945793532 podStartE2EDuration="2m8.945793532s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:52.93405312 +0000 UTC m=+153.714822194" watchObservedRunningTime="2025-12-01 00:08:52.945793532 +0000 UTC m=+153.726562606" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.947757 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" event={"ID":"83ef7482-dbe0-429c-8cc6-d3dbef3768fb","Type":"ContainerStarted","Data":"81fed3667024a4ec0f07c73b4da9483915c45f3a0638f25b3beffd2655d13196"} Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.952079 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.952282 4846 patch_prober.go:28] interesting pod/downloads-7954f5f757-557lc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 01 00:08:52 crc kubenswrapper[4846]: I1201 00:08:52.952315 4846 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-557lc" podUID="90578ce0-0758-4827-bc5b-d1d8ca39148e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.021513 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" podStartSLOduration=129.021495507 podStartE2EDuration="2m9.021495507s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:52.989524356 +0000 UTC m=+153.770293420" watchObservedRunningTime="2025-12-01 00:08:53.021495507 +0000 UTC m=+153.802264581" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.024435 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.025127 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.525108543 +0000 UTC m=+154.305877617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.084645 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-26vlv" podStartSLOduration=129.08462827 podStartE2EDuration="2m9.08462827s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.023481661 +0000 UTC m=+153.804250735" watchObservedRunningTime="2025-12-01 00:08:53.08462827 +0000 UTC m=+153.865397344" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.125349 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.126956 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.62693955 +0000 UTC m=+154.407708624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.127771 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podStartSLOduration=129.12773748 podStartE2EDuration="2m9.12773748s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.09421396 +0000 UTC m=+153.874983034" watchObservedRunningTime="2025-12-01 00:08:53.12773748 +0000 UTC m=+153.908506554" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.214534 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" podStartSLOduration=129.214508741 podStartE2EDuration="2m9.214508741s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.128914894 +0000 UTC m=+153.909683968" watchObservedRunningTime="2025-12-01 00:08:53.214508741 +0000 UTC m=+153.995277825" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.215301 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" podStartSLOduration=129.21529531 podStartE2EDuration="2m9.21529531s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.213079207 +0000 UTC m=+153.993848291" watchObservedRunningTime="2025-12-01 00:08:53.21529531 +0000 UTC m=+153.996064384" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.235452 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.237049 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.737036447 +0000 UTC m=+154.517805521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.254580 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.255905 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.254618 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4mzv7" podStartSLOduration=129.254598368 podStartE2EDuration="2m9.254598368s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.254196712 +0000 UTC m=+154.034965786" watchObservedRunningTime="2025-12-01 00:08:53.254598368 +0000 UTC m=+154.035367442" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.279640 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.289407 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.313649 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cvl5d" podStartSLOduration=9.313627716 podStartE2EDuration="9.313627716s" podCreationTimestamp="2025-12-01 00:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:53.311455945 +0000 UTC m=+154.092225029" watchObservedRunningTime="2025-12-01 00:08:53.313627716 +0000 UTC m=+154.094396790" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.341011 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.341224 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.341276 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kns\" (UniqueName: \"kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " 
pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.341352 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.341521 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.841503144 +0000 UTC m=+154.622272228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.412098 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.413045 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.418886 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.428786 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.449495 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.449602 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.449640 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5kns\" (UniqueName: \"kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.449671 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.450027 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:53.950006631 +0000 UTC m=+154.730775705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.450321 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.450600 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.477262 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5kns\" (UniqueName: \"kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns\") pod \"community-operators-w968z\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.551160 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.551375 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.05135003 +0000 UTC m=+154.832119104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.551427 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.551485 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.551505 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.551532 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcv4t\" (UniqueName: \"kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.551795 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.051788067 +0000 UTC m=+154.832557141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.641847 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.652419 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.652664 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.652722 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.652767 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcv4t\" (UniqueName: \"kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.653016 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.15297152 +0000 UTC m=+154.933740594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.653601 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.654134 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.654560 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.654473 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.154462235 +0000 UTC m=+154.935231309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.669160 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.670412 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.685586 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcv4t\" (UniqueName: \"kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t\") pod \"certified-operators-xjt44\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.733973 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.739844 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.756215 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.756553 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chdt\" (UniqueName: \"kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.756600 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.756624 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.756837 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.256822283 +0000 UTC m=+155.037591347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.828533 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.836130 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.837370 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.858107 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.858253 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.858283 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chdt\" (UniqueName: \"kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.858312 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.858814 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.859099 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 00:08:53.859434 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.359419158 +0000 UTC m=+155.140188232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.898871 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.901982 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chdt\" (UniqueName: \"kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt\") pod \"community-operators-97xhs\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.931095 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:53 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:53 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:53 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.931178 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.959208 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.959523 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.959563 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.959664 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4kc\" (UniqueName: \"kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:53 crc kubenswrapper[4846]: E1201 
00:08:53.959833 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.459813441 +0000 UTC m=+155.240582515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:53 crc kubenswrapper[4846]: I1201 00:08:53.983493 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" event={"ID":"83ef7482-dbe0-429c-8cc6-d3dbef3768fb","Type":"ContainerStarted","Data":"c2b845c70b438ed717c55223980c57f08878848918f75e47cd9742ed04ce532a"} Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.003053 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.044619 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" event={"ID":"a2cba7d5-652e-4c41-80a0-5477f682832f","Type":"ContainerStarted","Data":"6bb581a7f1e01b3438db0c8ca9e5f342d224428115e02123a43681a221abee21"} Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.060834 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.060885 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc4kc\" (UniqueName: \"kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.060941 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.060961 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.061453 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.061901 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.561889668 +0000 UTC m=+155.342658742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.062533 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.084519 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" event={"ID":"06c91556-9f39-425b-a247-d830eba2643c","Type":"ContainerStarted","Data":"ca8ed5a75908a5d4fdf82082ece160d21992a9acde4b03c3aa11feda714533f6"} Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.085580 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.100875 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc4kc\" (UniqueName: \"kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc\") pod \"certified-operators-ct8wp\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.117340 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" podStartSLOduration=130.117321111 podStartE2EDuration="2m10.117321111s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.113993985 +0000 UTC m=+154.894763059" watchObservedRunningTime="2025-12-01 00:08:54.117321111 +0000 UTC m=+154.898090185" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.128055 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" event={"ID":"8f77b310-1408-4832-966f-396fbf5c2aa9","Type":"ContainerStarted","Data":"16caf69597b1b173caf28db2bebf250c954c80752315d2771b5663e2e2710d88"} Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.128878 4846 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6rhj6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.128943 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.136823 4846 patch_prober.go:28] interesting pod/downloads-7954f5f757-557lc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.136891 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-557lc" podUID="90578ce0-0758-4827-bc5b-d1d8ca39148e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.136965 4846 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lq6bl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.136984 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.171502 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" podStartSLOduration=130.171466915 podStartE2EDuration="2m10.171466915s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.156065897 +0000 UTC m=+154.936834981" watchObservedRunningTime="2025-12-01 00:08:54.171466915 +0000 UTC m=+154.952236149" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.174004 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.174451 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.674432316 +0000 UTC m=+155.455201390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.182828 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.211904 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-g2cv9" podStartSLOduration=130.211883394 podStartE2EDuration="2m10.211883394s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.209315128 +0000 UTC m=+154.990084212" watchObservedRunningTime="2025-12-01 00:08:54.211883394 +0000 UTC m=+154.992652468" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.277296 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.281108 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6szg" podStartSLOduration=130.281089705 podStartE2EDuration="2m10.281089705s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.247557825 +0000 UTC m=+155.028326899" watchObservedRunningTime="2025-12-01 00:08:54.281089705 +0000 UTC m=+155.061858789" Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.281948 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.781932867 +0000 UTC m=+155.562701931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.378869 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.380079 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.880058735 +0000 UTC m=+155.660827809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.397747 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kttfm" podStartSLOduration=130.397729199 podStartE2EDuration="2m10.397729199s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.282948706 +0000 UTC m=+155.063717780" watchObservedRunningTime="2025-12-01 00:08:54.397729199 +0000 UTC m=+155.178498273" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.409708 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" podStartSLOduration=130.409671217 podStartE2EDuration="2m10.409671217s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.3969682 +0000 UTC m=+155.177737284" watchObservedRunningTime="2025-12-01 00:08:54.409671217 +0000 UTC m=+155.190440291" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.483159 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.483729 4846 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:54.98371605 +0000 UTC m=+155.764485124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.541494 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" podStartSLOduration=130.54146091 podStartE2EDuration="2m10.54146091s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.51722524 +0000 UTC m=+155.297994314" watchObservedRunningTime="2025-12-01 00:08:54.54146091 +0000 UTC m=+155.322229984" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.555872 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.585338 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.585873 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.085854339 +0000 UTC m=+155.866623413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.690429 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.690794 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.190780512 +0000 UTC m=+155.971549586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.792832 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.793293 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.293268804 +0000 UTC m=+156.074037878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.793409 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.793745 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.293731561 +0000 UTC m=+156.074500635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.850100 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s4kv4" podStartSLOduration=130.850082119 podStartE2EDuration="2m10.850082119s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:54.635185843 +0000 UTC m=+155.415954937" watchObservedRunningTime="2025-12-01 00:08:54.850082119 +0000 UTC m=+155.630851193" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.851049 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.894211 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:54 crc kubenswrapper[4846]: E1201 00:08:54.894640 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.394624803 +0000 UTC m=+156.175393877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.923080 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:54 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:54 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:54 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.923140 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:54 crc kubenswrapper[4846]: I1201 00:08:54.970923 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:08:54 crc kubenswrapper[4846]: W1201 00:08:54.988374 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e0d9f8_77de_43d5_a450_e889f929df32.slice/crio-1f115b8b6eb17d6aafb1cbe6bf4a07ec8c332d05cb8596ceabfdcb7a542d6638 WatchSource:0}: Error finding container 1f115b8b6eb17d6aafb1cbe6bf4a07ec8c332d05cb8596ceabfdcb7a542d6638: Status 404 returned error can't find the container with id 1f115b8b6eb17d6aafb1cbe6bf4a07ec8c332d05cb8596ceabfdcb7a542d6638 Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:54.997522 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:54.997896 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.497885004 +0000 UTC m=+156.278654078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.006005 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.098239 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.098392 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.59836391 +0000 UTC m=+156.379132984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.098569 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.098861 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.598852528 +0000 UTC m=+156.379621602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.142785 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerStarted","Data":"1f115b8b6eb17d6aafb1cbe6bf4a07ec8c332d05cb8596ceabfdcb7a542d6638"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.152780 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" event={"ID":"435c27d7-a826-4c55-af67-f0cb995b4447","Type":"ContainerStarted","Data":"5b8b018d59eb80305529f6be16a3c00720975d658c5a17c6da61f12e319b8589"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.157839 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerStarted","Data":"7518d64a75c34dea9e43f11d9fbb6c5436bfbc0f6dbd04736f8aab6b9efb068b"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.160861 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerStarted","Data":"e89bedf594e477d24eae97fd6a5904f530932c385250810ece81ce7b81f1f24e"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.168017 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6fqvb" event={"ID":"5c778133-16f9-4ac3-b9ae-6e93df7d1e0c","Type":"ContainerStarted","Data":"5646cc99ccaccad0a4d2d86aa47ea7615bf19d0bbf68fef18915e24dc89808d4"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.169386 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerStarted","Data":"58b5abd201a345d069a48a0ab01b9c2e8509ea4e7251798bfadf6186fc04a709"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.175561 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" event={"ID":"0386d75d-f624-4e2d-a804-2a9abaec1f71","Type":"ContainerStarted","Data":"aa25ac4c0ed35e9120dc53d5514b9362ff9cbfe6ea2452577156c7055f1e0a86"} Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.175751 4846 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6rhj6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.175787 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.178251 4846 
patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lsjh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.178279 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" podUID="06c91556-9f39-425b-a247-d830eba2643c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.201959 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.202211 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.702181441 +0000 UTC m=+156.482950515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.202286 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.202596 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.702574666 +0000 UTC m=+156.483343740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.202648 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhgcq" podStartSLOduration=131.202625028 podStartE2EDuration="2m11.202625028s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:55.202221403 +0000 UTC m=+155.982990477" watchObservedRunningTime="2025-12-01 00:08:55.202625028 +0000 UTC m=+155.983394102" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.214432 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.215971 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.229923 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.249039 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.306242 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.306545 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zqv\" (UniqueName: \"kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.306603 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.306705 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.307326 4846 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.807311603 +0000 UTC m=+156.588080677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.408779 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zqv\" (UniqueName: \"kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.409049 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.409069 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.409131 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.409524 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:55.909504643 +0000 UTC m=+156.690273717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.409565 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.409606 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.420166 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.420336 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.446354 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zqv\" (UniqueName: \"kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv\") pod \"redhat-marketplace-sdf9f\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.510074 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.510319 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.01027895 +0000 UTC m=+156.791048034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.510507 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.510872 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.010854962 +0000 UTC m=+156.791624036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.539861 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.611585 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.611825 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.111785785 +0000 UTC m=+156.892554859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.612108 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.612506 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.112489612 +0000 UTC m=+156.893258686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.626659 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.627798 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.715612 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.716450 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.716614 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.716740 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsc66\" (UniqueName: \"kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.716941 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.216909726 +0000 UTC m=+156.997678800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.727192 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.810666 4846 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lsjh9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.810715 4846 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lsjh9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.810730 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" podUID="06c91556-9f39-425b-a247-d830eba2643c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.810747 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" podUID="06c91556-9f39-425b-a247-d830eba2643c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.827698 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.827797 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsc66\" (UniqueName: \"kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.827828 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.827892 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.828173 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: E1201 00:08:55.828211 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.328198189 +0000 UTC m=+157.108967263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.828450 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.875072 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsc66\" (UniqueName: \"kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66\") pod \"redhat-marketplace-44ddl\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.933397 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:55 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:55 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:55 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.933752 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.934005 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:55 crc kubenswrapper[4846]: 
E1201 00:08:55.934366 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.434344728 +0000 UTC m=+157.215113812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.943988 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.983827 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:55 crc kubenswrapper[4846]: I1201 00:08:55.983881 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.013880 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.039178 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.039655 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.539643925 +0000 UTC m=+157.320412999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.140679 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.140958 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-01 00:08:56.640926402 +0000 UTC m=+157.421695496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.141097 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.141447 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.641438671 +0000 UTC m=+157.422207745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.191913 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.227558 4846 generic.go:334] "Generic (PLEG): container finished" podID="45e0d9f8-77de-43d5-a450-e889f929df32" containerID="025e1d9215763f6f6621f54a95fe598643f1f08fa734a025c34b16f2f144321d" exitCode=0 Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.227707 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerDied","Data":"025e1d9215763f6f6621f54a95fe598643f1f08fa734a025c34b16f2f144321d"} Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.233424 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.242019 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.242587 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.742569012 +0000 UTC m=+157.523338086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.245089 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" event={"ID":"608d1635-c6ea-474a-9a40-99196daa0ae0","Type":"ContainerStarted","Data":"f1dc6051c545ec9b4e48eb49f164d8d638736a3033ff20afe1639799edb0d1cc"} Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.248396 4846 generic.go:334] "Generic (PLEG): container finished" podID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerID="1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba" exitCode=0 Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.248452 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerDied","Data":"1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba"} Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.259987 4846 generic.go:334] "Generic (PLEG): container finished" podID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerID="1d65eb3ec102d08516588790dd3d7551eaa5c0f6f63efed7dd36187a06844117" exitCode=0 Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.260060 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerDied","Data":"1d65eb3ec102d08516588790dd3d7551eaa5c0f6f63efed7dd36187a06844117"} Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.262137 4846 generic.go:334] "Generic (PLEG): container finished" podID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerID="93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8" exitCode=0 Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.263470 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerDied","Data":"93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8"} Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.263499 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6fqvb" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.291249 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-685b8" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.313372 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gwwrs" podStartSLOduration=132.313349061 podStartE2EDuration="2m12.313349061s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:56.309034609 +0000 UTC m=+157.089803673" watchObservedRunningTime="2025-12-01 00:08:56.313349061 +0000 UTC m=+157.094118135" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.344380 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.344913 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.844895707 +0000 UTC m=+157.625664791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.363788 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lgjgk" podStartSLOduration=132.363655792 podStartE2EDuration="2m12.363655792s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:56.359162723 +0000 UTC m=+157.139931797" watchObservedRunningTime="2025-12-01 00:08:56.363655792 +0000 UTC m=+157.144424876" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.445468 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.445944 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:56.945929564 +0000 UTC m=+157.726698628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.460744 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-k87xr" podStartSLOduration=132.460729461 podStartE2EDuration="2m12.460729461s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:56.421147363 +0000 UTC m=+157.201916447" watchObservedRunningTime="2025-12-01 00:08:56.460729461 +0000 UTC m=+157.241498535" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.511529 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6fqvb" podStartSLOduration=12.511507589 podStartE2EDuration="12.511507589s" podCreationTimestamp="2025-12-01 00:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:56.50677223 +0000 UTC m=+157.287541314" watchObservedRunningTime="2025-12-01 00:08:56.511507589 +0000 UTC m=+157.292276663" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.511605 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.539502 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lsjh9" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.549063 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.549538 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.049523477 +0000 UTC m=+157.830292551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.648400 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.649631 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.650139 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.650427 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.150411059 +0000 UTC m=+157.931180133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.652538 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.653254 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.153244816 +0000 UTC m=+157.934013890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.660802 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.666191 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.762606 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.762838 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.262809743 +0000 UTC m=+158.043578817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.763807 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.763883 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.764053 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jctc\" (UniqueName: \"kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.764169 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.764519 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.264507027 +0000 UTC m=+158.045276101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.833001 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.833065 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.834668 4846 patch_prober.go:28] interesting pod/console-f9d7485db-pfkf6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.834755 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pfkf6" podUID="b5a36407-b124-4956-b91c-3be1a6cfa4b3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.841154 4846 patch_prober.go:28] interesting pod/downloads-7954f5f757-557lc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.841230 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-557lc" podUID="90578ce0-0758-4827-bc5b-d1d8ca39148e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.841490 4846 patch_prober.go:28] interesting pod/downloads-7954f5f757-557lc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.841562 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-557lc" podUID="90578ce0-0758-4827-bc5b-d1d8ca39148e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.865459 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.865786 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities\") pod 
\"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.865815 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.865872 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jctc\" (UniqueName: \"kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.866264 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.366243411 +0000 UTC m=+158.147012485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.867038 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.867346 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.899855 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jctc\" (UniqueName: \"kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc\") pod \"redhat-operators-8j266\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.919009 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.921747 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:56 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:56 crc kubenswrapper[4846]: 
[+]process-running ok Dec 01 00:08:56 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.921839 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.986177 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:08:56 crc kubenswrapper[4846]: I1201 00:08:56.987008 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:56 crc kubenswrapper[4846]: E1201 00:08:56.988126 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.48811334 +0000 UTC m=+158.268882414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.026571 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.030360 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.051026 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.088778 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.089547 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.589522642 +0000 UTC m=+158.370291716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.190630 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.190705 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lss6\" (UniqueName: \"kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.190751 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.190829 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.191091 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.691073378 +0000 UTC m=+158.471842452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.233137 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.270664 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.271814 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" event={"ID":"608d1635-c6ea-474a-9a40-99196daa0ae0","Type":"ContainerStarted","Data":"e7125d00c420a9641a425735aa5eeb79c797eac9f6ec6c60b77e8484ece5f8eb"} Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.272973 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerStarted","Data":"5b8561274f57734803139c634b1f182ce95cd2a93737a991466d039924dda4dd"} Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.274290 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" event={"ID":"8f77b310-1408-4832-966f-396fbf5c2aa9","Type":"ContainerStarted","Data":"b235bda36b45315b791ceac52debb9d0612e4bdb6c72561e805739f6a9e182e7"} Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.275672 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerStarted","Data":"c7ca102c7cea765e8c1351a3a00d516c890def3a37f95aea12c06272569244c3"} Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.291900 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.292294 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.292344 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lss6\" (UniqueName: \"kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.292393 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.293136 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.293234 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.793214976 +0000 UTC m=+158.573984050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.293481 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.322917 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lss6\" (UniqueName: \"kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6\") pod \"redhat-operators-95f2l\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.393174 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.394052 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.394376 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.894360818 +0000 UTC m=+158.675129992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.396837 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qjd4q" Dec 01 00:08:57 crc kubenswrapper[4846]: W1201 00:08:57.401083 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54eba203_c984_4df6_91bc_ba04e655e541.slice/crio-3bbe9a51eb958c2327c75d2aa6e924c01c8d10d70bea1cec6ef449ccbafb8755 WatchSource:0}: Error finding container 3bbe9a51eb958c2327c75d2aa6e924c01c8d10d70bea1cec6ef449ccbafb8755: Status 404 returned error can't find the container with id 3bbe9a51eb958c2327c75d2aa6e924c01c8d10d70bea1cec6ef449ccbafb8755 Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.407203 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.495527 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.496536 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:57.996521528 +0000 UTC m=+158.777290602 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.521200 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.602658 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.602717 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.602970 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-01 00:08:58.102958428 +0000 UTC m=+158.883727502 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbsfs" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.691648 4846 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.703157 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.705009 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:57 crc kubenswrapper[4846]: E1201 00:08:57.706593 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-01 00:08:58.206574232 +0000 UTC m=+158.987343306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.746537 4846 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-01T00:08:57.691695962Z","Handler":null,"Name":""} Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.751617 4846 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.751734 4846 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.810607 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.814951 4846 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.814993 4846 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.842991 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbsfs\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.872002 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.912963 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.922882 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:57 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:57 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:57 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.922960 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:57 crc kubenswrapper[4846]: I1201 00:08:57.997114 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.155848 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:08:58 crc kubenswrapper[4846]: W1201 00:08:58.167843 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae23581_006a_44dd_aae2_d85d847dda2e.slice/crio-41531b79fac5fe433039a9a4870b881af24f544e789c1f3bc326ed058cce73d1 WatchSource:0}: Error finding container 41531b79fac5fe433039a9a4870b881af24f544e789c1f3bc326ed058cce73d1: Status 404 returned error can't find the container with id 41531b79fac5fe433039a9a4870b881af24f544e789c1f3bc326ed058cce73d1 Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.284005 4846 generic.go:334] "Generic (PLEG): container finished" podID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerID="11debf85cd1a6c90cf566131dd2001d5ce58d6bac5b3e811e86a911ca29619ba" exitCode=0 Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.284090 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerDied","Data":"11debf85cd1a6c90cf566131dd2001d5ce58d6bac5b3e811e86a911ca29619ba"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.293892 4846 generic.go:334] "Generic (PLEG): container finished" podID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerID="130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c" exitCode=0 Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.293963 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" 
event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerDied","Data":"130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.299520 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" event={"ID":"8ae23581-006a-44dd-aae2-d85d847dda2e","Type":"ContainerStarted","Data":"41531b79fac5fe433039a9a4870b881af24f544e789c1f3bc326ed058cce73d1"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.306066 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerStarted","Data":"091a5bb0bdcd094ee559a0e90f91d2688e2e048539e3cf57d68e79641415b292"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.330494 4846 generic.go:334] "Generic (PLEG): container finished" podID="54eba203-c984-4df6-91bc-ba04e655e541" containerID="3ee859906487fc5c040fda17e43efc2ca2c8d01bfa960f97e0cf1f59d701a1c8" exitCode=0 Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.331506 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerDied","Data":"3ee859906487fc5c040fda17e43efc2ca2c8d01bfa960f97e0cf1f59d701a1c8"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.331535 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerStarted","Data":"3bbe9a51eb958c2327c75d2aa6e924c01c8d10d70bea1cec6ef449ccbafb8755"} Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.369988 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" podStartSLOduration=134.369968733 podStartE2EDuration="2m14.369968733s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:08:58.368797879 +0000 UTC m=+159.149566953" watchObservedRunningTime="2025-12-01 00:08:58.369968733 +0000 UTC m=+159.150737807" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.563771 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.564948 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.567036 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.567341 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.570659 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.728214 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.728304 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.834664 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.834756 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.835159 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.859722 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.880942 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.922565 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:58 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:58 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:58 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:58 crc kubenswrapper[4846]: I1201 00:08:58.922642 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.351529 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 01 00:08:59 crc kubenswrapper[4846]: W1201 00:08:59.371264 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod01f69c7a_0f2d_4335_8ad8_fe20e430506a.slice/crio-8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91 WatchSource:0}: Error finding container 8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91: Status 404 returned error can't find the container with id 8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91 Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.398783 4846 generic.go:334] "Generic (PLEG): container finished" podID="31064592-6043-412a-82e6-4eb313fa16a3" containerID="20b1175a3be0d6537b5c240b4f24f7956698bfafa9bf47a7c785e2b8368d96a7" exitCode=0 Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.398901 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" event={"ID":"31064592-6043-412a-82e6-4eb313fa16a3","Type":"ContainerDied","Data":"20b1175a3be0d6537b5c240b4f24f7956698bfafa9bf47a7c785e2b8368d96a7"} Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.403074 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerStarted","Data":"5cc9484a1ab26b7c42b52fc0cfec5995db2ead4b6735fa26e39a9580a0f340d9"} Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.595366 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.921423 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:08:59 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:08:59 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:08:59 crc kubenswrapper[4846]: healthz check failed Dec 01 00:08:59 crc kubenswrapper[4846]: I1201 00:08:59.921476 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.411129 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" event={"ID":"8f77b310-1408-4832-966f-396fbf5c2aa9","Type":"ContainerStarted","Data":"3eb80cf5fb64fdad8548495b7180e449e69f2ed3035241d3d370bda04bbef883"} Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.413889 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" event={"ID":"8ae23581-006a-44dd-aae2-d85d847dda2e","Type":"ContainerStarted","Data":"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d"} Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.414789 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.416393 4846 generic.go:334] "Generic (PLEG): container finished" podID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerID="5cc9484a1ab26b7c42b52fc0cfec5995db2ead4b6735fa26e39a9580a0f340d9" exitCode=0 Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.416804 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerDied","Data":"5cc9484a1ab26b7c42b52fc0cfec5995db2ead4b6735fa26e39a9580a0f340d9"} Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.419277 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"01f69c7a-0f2d-4335-8ad8-fe20e430506a","Type":"ContainerStarted","Data":"f0baa422c33474bf29d68c8d9d39264f6a6be832aaa520dfa880bb3956b26eb3"} Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.419299 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"01f69c7a-0f2d-4335-8ad8-fe20e430506a","Type":"ContainerStarted","Data":"8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91"} Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.435316 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" podStartSLOduration=136.435298502 podStartE2EDuration="2m16.435298502s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:09:00.431569592 +0000 UTC m=+161.212338666" watchObservedRunningTime="2025-12-01 00:09:00.435298502 +0000 UTC m=+161.216067576" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.635534 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.771643 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume\") pod \"31064592-6043-412a-82e6-4eb313fa16a3\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.771783 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") pod \"31064592-6043-412a-82e6-4eb313fa16a3\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.771859 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq8z6\" (UniqueName: \"kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6\") pod \"31064592-6043-412a-82e6-4eb313fa16a3\" (UID: \"31064592-6043-412a-82e6-4eb313fa16a3\") " Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.772768 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "31064592-6043-412a-82e6-4eb313fa16a3" (UID: "31064592-6043-412a-82e6-4eb313fa16a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.794829 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "31064592-6043-412a-82e6-4eb313fa16a3" (UID: "31064592-6043-412a-82e6-4eb313fa16a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.808824 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6" (OuterVolumeSpecName: "kube-api-access-cq8z6") pod "31064592-6043-412a-82e6-4eb313fa16a3" (UID: "31064592-6043-412a-82e6-4eb313fa16a3"). InnerVolumeSpecName "kube-api-access-cq8z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.873479 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq8z6\" (UniqueName: \"kubernetes.io/projected/31064592-6043-412a-82e6-4eb313fa16a3-kube-api-access-cq8z6\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.873526 4846 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/31064592-6043-412a-82e6-4eb313fa16a3-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.873539 4846 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31064592-6043-412a-82e6-4eb313fa16a3-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.923525 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:00 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:00 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:00 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:00 crc kubenswrapper[4846]: I1201 00:09:00.923584 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.064669 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 00:09:01 crc kubenswrapper[4846]: E1201 00:09:01.065261 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31064592-6043-412a-82e6-4eb313fa16a3" containerName="collect-profiles" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.065277 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="31064592-6043-412a-82e6-4eb313fa16a3" containerName="collect-profiles" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.065385 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="31064592-6043-412a-82e6-4eb313fa16a3" containerName="collect-profiles" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.065865 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.067436 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.068050 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.077999 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.078100 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.078466 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.180340 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.180606 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.180735 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.195885 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.392290 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.450420 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.454841 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps" event={"ID":"31064592-6043-412a-82e6-4eb313fa16a3","Type":"ContainerDied","Data":"c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344"} Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.454909 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c063cb3444811a36ee1907c52e8c74f7af11406d4d2b604b536b11f982ba4344" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.473580 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.473554132 podStartE2EDuration="3.473554132s" podCreationTimestamp="2025-12-01 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:09:01.46684405 +0000 UTC m=+162.247613124" watchObservedRunningTime="2025-12-01 00:09:01.473554132 +0000 UTC m=+162.254323206" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.618890 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.921563 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:01 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:01 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:01 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.921878 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.966143 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.966220 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:09:01 crc kubenswrapper[4846]: I1201 00:09:01.972646 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.456605 4846 generic.go:334] "Generic (PLEG): container finished" podID="01f69c7a-0f2d-4335-8ad8-fe20e430506a" containerID="f0baa422c33474bf29d68c8d9d39264f6a6be832aaa520dfa880bb3956b26eb3" exitCode=0 Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.456677 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"01f69c7a-0f2d-4335-8ad8-fe20e430506a","Type":"ContainerDied","Data":"f0baa422c33474bf29d68c8d9d39264f6a6be832aaa520dfa880bb3956b26eb3"} Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.469841 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0338af7b-7f32-4bff-adb8-3a5f13b2fd98","Type":"ContainerStarted","Data":"fd71a860f09f63fe400bbd06dc039b86083cef9191f21be8ffdeecc729944115"} Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.478766 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" event={"ID":"8f77b310-1408-4832-966f-396fbf5c2aa9","Type":"ContainerStarted","Data":"1fa10219cec7e12bb6dcde376bee42a576f324c7acae7947fb136c4f76702b75"} Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.484014 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tq72z" Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.501670 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-nq2mw" podStartSLOduration=18.50164651 podStartE2EDuration="18.50164651s" podCreationTimestamp="2025-12-01 00:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:09:02.498565604 +0000 UTC m=+163.279334678" watchObservedRunningTime="2025-12-01 00:09:02.50164651 +0000 UTC m=+163.282415584" Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.922588 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:02 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:02 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:02 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:02 crc kubenswrapper[4846]: I1201 00:09:02.922669 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:03 crc kubenswrapper[4846]: I1201 00:09:03.004344 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6fqvb" Dec 01 00:09:03 crc kubenswrapper[4846]: I1201 00:09:03.487782 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0338af7b-7f32-4bff-adb8-3a5f13b2fd98","Type":"ContainerStarted","Data":"af589b49c1aa98a70840ff6abd407ea7ffbc84e9c26f06559e73235a227ab007"} Dec 01 00:09:03 crc kubenswrapper[4846]: I1201 00:09:03.922274 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:03 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:03 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:03 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:03 crc kubenswrapper[4846]: I1201 00:09:03.922617 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:04 crc kubenswrapper[4846]: I1201 00:09:04.921415 4846 patch_prober.go:28] interesting 
pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:04 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:04 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:04 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:04 crc kubenswrapper[4846]: I1201 00:09:04.921528 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:05 crc kubenswrapper[4846]: I1201 00:09:05.504134 4846 generic.go:334] "Generic (PLEG): container finished" podID="0338af7b-7f32-4bff-adb8-3a5f13b2fd98" containerID="af589b49c1aa98a70840ff6abd407ea7ffbc84e9c26f06559e73235a227ab007" exitCode=0 Dec 01 00:09:05 crc kubenswrapper[4846]: I1201 00:09:05.504183 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0338af7b-7f32-4bff-adb8-3a5f13b2fd98","Type":"ContainerDied","Data":"af589b49c1aa98a70840ff6abd407ea7ffbc84e9c26f06559e73235a227ab007"} Dec 01 00:09:05 crc kubenswrapper[4846]: I1201 00:09:05.942874 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:05 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:05 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:05 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:05 crc kubenswrapper[4846]: I1201 00:09:05.943181 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:06 crc kubenswrapper[4846]: I1201 00:09:06.833445 4846 patch_prober.go:28] interesting pod/console-f9d7485db-pfkf6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 01 00:09:06 crc kubenswrapper[4846]: I1201 00:09:06.833552 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pfkf6" podUID="b5a36407-b124-4956-b91c-3be1a6cfa4b3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 01 00:09:06 crc kubenswrapper[4846]: I1201 00:09:06.847122 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-557lc" Dec 01 00:09:06 crc kubenswrapper[4846]: I1201 00:09:06.921783 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:06 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:06 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:06 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:06 crc 
kubenswrapper[4846]: I1201 00:09:06.921872 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:07 crc kubenswrapper[4846]: I1201 00:09:07.482724 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:09:07 crc kubenswrapper[4846]: I1201 00:09:07.491091 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/219022f7-8f31-4021-9df8-733c23b34602-metrics-certs\") pod \"network-metrics-daemon-rl69z\" (UID: \"219022f7-8f31-4021-9df8-733c23b34602\") " pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:09:07 crc kubenswrapper[4846]: I1201 00:09:07.699530 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rl69z" Dec 01 00:09:07 crc kubenswrapper[4846]: I1201 00:09:07.922276 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:07 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:07 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:07 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:07 crc kubenswrapper[4846]: I1201 00:09:07.922332 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:08 crc kubenswrapper[4846]: I1201 00:09:08.921910 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:08 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:08 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:08 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:08 crc kubenswrapper[4846]: I1201 00:09:08.922024 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:09 crc kubenswrapper[4846]: I1201 00:09:09.921420 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:09 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:09 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:09 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:09 crc kubenswrapper[4846]: I1201 00:09:09.921513 4846 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:10 crc kubenswrapper[4846]: I1201 00:09:10.921651 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:10 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:10 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:10 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:10 crc kubenswrapper[4846]: I1201 00:09:10.921934 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:11 crc kubenswrapper[4846]: I1201 00:09:11.922458 4846 patch_prober.go:28] interesting pod/router-default-5444994796-wwxb2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 01 00:09:11 crc kubenswrapper[4846]: [-]has-synced failed: reason withheld Dec 01 00:09:11 crc kubenswrapper[4846]: [+]process-running ok Dec 01 00:09:11 crc kubenswrapper[4846]: healthz check failed Dec 01 00:09:11 crc kubenswrapper[4846]: I1201 00:09:11.922539 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwxb2" podUID="e342b46d-339c-4903-b2ca-46ee21ba99aa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 01 00:09:12 crc kubenswrapper[4846]: I1201 00:09:12.921745 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:09:12 crc kubenswrapper[4846]: I1201 00:09:12.926307 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wwxb2" Dec 01 00:09:17 crc kubenswrapper[4846]: I1201 00:09:17.120117 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:09:17 crc kubenswrapper[4846]: I1201 00:09:17.125283 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pfkf6" Dec 01 00:09:17 crc kubenswrapper[4846]: I1201 00:09:17.877180 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.185642 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.194467 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311599 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access\") pod \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311721 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access\") pod \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311803 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir\") pod \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\" (UID: \"0338af7b-7f32-4bff-adb8-3a5f13b2fd98\") " Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311831 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir\") pod \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\" (UID: \"01f69c7a-0f2d-4335-8ad8-fe20e430506a\") " Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311878 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0338af7b-7f32-4bff-adb8-3a5f13b2fd98" (UID: "0338af7b-7f32-4bff-adb8-3a5f13b2fd98"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.311991 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "01f69c7a-0f2d-4335-8ad8-fe20e430506a" (UID: "01f69c7a-0f2d-4335-8ad8-fe20e430506a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.312229 4846 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.312259 4846 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.317062 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0338af7b-7f32-4bff-adb8-3a5f13b2fd98" (UID: "0338af7b-7f32-4bff-adb8-3a5f13b2fd98"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.317157 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "01f69c7a-0f2d-4335-8ad8-fe20e430506a" (UID: "01f69c7a-0f2d-4335-8ad8-fe20e430506a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.413652 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01f69c7a-0f2d-4335-8ad8-fe20e430506a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.413718 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0338af7b-7f32-4bff-adb8-3a5f13b2fd98-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.604165 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"01f69c7a-0f2d-4335-8ad8-fe20e430506a","Type":"ContainerDied","Data":"8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91"} Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.604215 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d225541cc7c12a210394ac783993e2b6972ecf35a0b7be9a547d45a369fbf91" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.604176 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.606091 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0338af7b-7f32-4bff-adb8-3a5f13b2fd98","Type":"ContainerDied","Data":"fd71a860f09f63fe400bbd06dc039b86083cef9191f21be8ffdeecc729944115"} Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.606130 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd71a860f09f63fe400bbd06dc039b86083cef9191f21be8ffdeecc729944115" Dec 01 00:09:22 crc kubenswrapper[4846]: I1201 00:09:22.606165 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 01 00:09:25 crc kubenswrapper[4846]: I1201 00:09:25.419407 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:09:25 crc kubenswrapper[4846]: I1201 00:09:25.420088 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:09:25 crc kubenswrapper[4846]: I1201 00:09:25.629758 4846 generic.go:334] "Generic (PLEG): container finished" podID="e969ab94-0cbc-487b-944e-b8b18e633127" containerID="c383a4ff8642e5d366adadbf5477dc45d67da9eed884998b7b38fca0de0f8ffc" exitCode=0 Dec 01 00:09:25 crc kubenswrapper[4846]: I1201 00:09:25.629835 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29409120-st7sv" event={"ID":"e969ab94-0cbc-487b-944e-b8b18e633127","Type":"ContainerDied","Data":"c383a4ff8642e5d366adadbf5477dc45d67da9eed884998b7b38fca0de0f8ffc"} Dec 01 00:09:26 crc kubenswrapper[4846]: I1201 00:09:26.625057 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 01 00:09:27 crc kubenswrapper[4846]: I1201 00:09:27.602310 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ct4bd" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.109378 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.189203 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z99k\" (UniqueName: \"kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k\") pod \"e969ab94-0cbc-487b-944e-b8b18e633127\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.189353 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca\") pod \"e969ab94-0cbc-487b-944e-b8b18e633127\" (UID: \"e969ab94-0cbc-487b-944e-b8b18e633127\") " Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.190588 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca" (OuterVolumeSpecName: "serviceca") pod "e969ab94-0cbc-487b-944e-b8b18e633127" (UID: "e969ab94-0cbc-487b-944e-b8b18e633127"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.195914 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k" (OuterVolumeSpecName: "kube-api-access-9z99k") pod "e969ab94-0cbc-487b-944e-b8b18e633127" (UID: "e969ab94-0cbc-487b-944e-b8b18e633127"). 
InnerVolumeSpecName "kube-api-access-9z99k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.290919 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z99k\" (UniqueName: \"kubernetes.io/projected/e969ab94-0cbc-487b-944e-b8b18e633127-kube-api-access-9z99k\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.290988 4846 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e969ab94-0cbc-487b-944e-b8b18e633127-serviceca\") on node \"crc\" DevicePath \"\"" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.686011 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29409120-st7sv" event={"ID":"e969ab94-0cbc-487b-944e-b8b18e633127","Type":"ContainerDied","Data":"d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14"} Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.686058 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3c3a4003e00fbcdc4fbb77fd3508060ce36fc77890b1f632387c0d3b7980f14" Dec 01 00:09:34 crc kubenswrapper[4846]: I1201 00:09:34.686083 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29409120-st7sv" Dec 01 00:09:37 crc kubenswrapper[4846]: E1201 00:09:37.139628 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 00:09:37 crc kubenswrapper[4846]: E1201 00:09:37.140015 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4chdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-97xhs_openshift-marketplace(45e0d9f8-77de-43d5-a450-e889f929df32): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Dec 01 00:09:37 crc kubenswrapper[4846]: E1201 00:09:37.141468 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-97xhs" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.076781 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 00:09:38 crc kubenswrapper[4846]: E1201 00:09:38.077175 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0338af7b-7f32-4bff-adb8-3a5f13b2fd98" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077191 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="0338af7b-7f32-4bff-adb8-3a5f13b2fd98" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: E1201 00:09:38.077214 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f69c7a-0f2d-4335-8ad8-fe20e430506a" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077238 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f69c7a-0f2d-4335-8ad8-fe20e430506a" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: E1201 00:09:38.077254 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e969ab94-0cbc-487b-944e-b8b18e633127" containerName="image-pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077263 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e969ab94-0cbc-487b-944e-b8b18e633127" containerName="image-pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077473 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="e969ab94-0cbc-487b-944e-b8b18e633127" containerName="image-pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077493 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="0338af7b-7f32-4bff-adb8-3a5f13b2fd98" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077504 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f69c7a-0f2d-4335-8ad8-fe20e430506a" containerName="pruner" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.077999 4846 util.go:30] "No sandbox for pod can be found. 
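Note: each marketplace catalog pod's extract-content init container fails with ErrImagePull ("context canceled" while copying the image from registry.redhat.io), and later sync attempts are rejected with ImagePullBackOff until a back-off window expires, as seen at 00:09:48 and again at 00:10:03. The sketch below only prints an exponential back-off schedule of the kind that sits behind ImagePullBackOff; the 10-second initial delay and 5-minute ceiling are assumed defaults for illustration, not values read from this cluster:

```go
package main

import (
	"fmt"
	"time"
)

// Print an exponential back-off schedule of the kind behind ImagePullBackOff:
// the wait doubles after each failed pull until it hits a ceiling.
func main() {
	delay := 10 * time.Second   // assumed initial delay
	maxDelay := 5 * time.Minute // assumed ceiling

	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying the pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```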
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.085319 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.088313 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.089643 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.154520 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.154610 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.257501 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.257575 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.257762 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.275077 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:38 crc kubenswrapper[4846]: I1201 00:09:38.415853 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:09:40 crc kubenswrapper[4846]: E1201 00:09:40.016262 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 01 00:09:40 crc kubenswrapper[4846]: E1201 00:09:40.016735 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5kns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w968z_openshift-marketplace(ad68a9ea-9986-4c5e-a87f-69f9c237a066): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:09:40 crc kubenswrapper[4846]: E1201 00:09:40.017866 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w968z" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.459815 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.460728 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.467975 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.510923 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.510991 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.511266 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.612323 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.612383 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.612406 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.612462 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:42 crc kubenswrapper[4846]: I1201 00:09:42.612507 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock\") pod \"installer-9-crc\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:43 crc kubenswrapper[4846]: I1201 00:09:43.075380 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:43 crc kubenswrapper[4846]: I1201 00:09:43.089563 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:09:48 crc kubenswrapper[4846]: E1201 00:09:48.789246 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w968z" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" Dec 01 00:09:48 crc kubenswrapper[4846]: E1201 00:09:48.811912 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 00:09:48 crc kubenswrapper[4846]: E1201 00:09:48.812087 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-95f2l_openshift-marketplace(6fe501d3-5ba5-4617-a355-0f69d5737dc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:09:48 crc kubenswrapper[4846]: E1201 00:09:48.813876 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-95f2l" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" Dec 01 00:09:55 crc kubenswrapper[4846]: I1201 00:09:55.420676 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:09:55 crc kubenswrapper[4846]: I1201 00:09:55.420829 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:09:55 crc kubenswrapper[4846]: I1201 00:09:55.420925 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:09:55 crc kubenswrapper[4846]: I1201 00:09:55.421836 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:09:55 crc kubenswrapper[4846]: I1201 00:09:55.421983 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b" gracePeriod=600 Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.613379 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.614516 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98zqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-sdf9f_openshift-marketplace(eb0df655-cdf4-4a30-bd80-2c6ac270d5b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.615818 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sdf9f" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.617922 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.618234 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsc66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-44ddl_openshift-marketplace(ba05fe92-6ff2-4f5e-9f60-2948afa23445): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:10:01 crc kubenswrapper[4846]: E1201 00:10:01.619479 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-44ddl" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" Dec 01 00:10:01 crc kubenswrapper[4846]: I1201 00:10:01.834636 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b" exitCode=0 Dec 01 00:10:01 crc 
kubenswrapper[4846]: I1201 00:10:01.834759 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b"} Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.796802 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-44ddl" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.796676 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sdf9f" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.800629 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.800851 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jctc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8j266_openshift-marketplace(54eba203-c984-4df6-91bc-ba04e655e541): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.802088 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8j266" podUID="54eba203-c984-4df6-91bc-ba04e655e541" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.832590 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.832791 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kc4kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ct8wp_openshift-marketplace(26dc41c0-b440-494e-9bfa-2a70f3e16040): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.833882 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ct8wp" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.852939 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8j266" podUID="54eba203-c984-4df6-91bc-ba04e655e541" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.852996 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ct8wp" 
podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.879333 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.881753 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcv4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xjt44_openshift-marketplace(02d89d24-0fd5-41c1-a392-27a63409d1c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:10:03 crc kubenswrapper[4846]: E1201 00:10:03.886584 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xjt44" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.066439 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rl69z"] Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.318344 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.348335 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 01 00:10:04 crc kubenswrapper[4846]: W1201 00:10:04.825111 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf98ada12_3624_482c_ba58_8f0df881923c.slice/crio-47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed WatchSource:0}: Error finding container 
47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed: Status 404 returned error can't find the container with id 47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.855645 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef6fd7f6-5551-4b1a-b743-84778b664a26","Type":"ContainerStarted","Data":"92de7e1beb555e918e9c9c87c8f6f165c68a07efa149b4584c57d1fdaceded52"} Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.859481 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl69z" event={"ID":"219022f7-8f31-4021-9df8-733c23b34602","Type":"ContainerStarted","Data":"ba013982b10d50a1e48a6e50fe5e00408431de5bc495098f3dac27fee8eb0d9f"} Dec 01 00:10:04 crc kubenswrapper[4846]: I1201 00:10:04.861508 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f98ada12-3624-482c-ba58-8f0df881923c","Type":"ContainerStarted","Data":"47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed"} Dec 01 00:10:05 crc kubenswrapper[4846]: E1201 00:10:05.012711 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xjt44" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" Dec 01 00:10:05 crc kubenswrapper[4846]: I1201 00:10:05.870131 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79"} Dec 01 00:10:07 crc kubenswrapper[4846]: I1201 00:10:07.905014 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerStarted","Data":"02e32f6cfb35110c893a7ecd9f5321d40ed5789e4be87d040e860c1c10b3b476"} Dec 01 00:10:07 crc kubenswrapper[4846]: I1201 00:10:07.907190 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerStarted","Data":"75cd310a4964dc2a4c149922f563428dff460bbd627b6d621138b092ee12e083"} Dec 01 00:10:07 crc kubenswrapper[4846]: I1201 00:10:07.908407 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl69z" event={"ID":"219022f7-8f31-4021-9df8-733c23b34602","Type":"ContainerStarted","Data":"d187d4dbe96ae2e35c582beb48ffead0f545b03359be3ce7732b7bf4404e904c"} Dec 01 00:10:07 crc kubenswrapper[4846]: I1201 00:10:07.909451 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerStarted","Data":"9c3dc8efb391849d38d0f4119c6ed3520d560098eb01e7ac315c933a37aa987a"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.919832 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rl69z" event={"ID":"219022f7-8f31-4021-9df8-733c23b34602","Type":"ContainerStarted","Data":"cc21f7968b18ed6f4bbaa84a95b48c29231765483538c5fa1d508b7bc89e21df"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 
00:10:08.922402 4846 generic.go:334] "Generic (PLEG): container finished" podID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerID="9c3dc8efb391849d38d0f4119c6ed3520d560098eb01e7ac315c933a37aa987a" exitCode=0 Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.922510 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerDied","Data":"9c3dc8efb391849d38d0f4119c6ed3520d560098eb01e7ac315c933a37aa987a"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.924816 4846 generic.go:334] "Generic (PLEG): container finished" podID="45e0d9f8-77de-43d5-a450-e889f929df32" containerID="02e32f6cfb35110c893a7ecd9f5321d40ed5789e4be87d040e860c1c10b3b476" exitCode=0 Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.924878 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerDied","Data":"02e32f6cfb35110c893a7ecd9f5321d40ed5789e4be87d040e860c1c10b3b476"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.927072 4846 generic.go:334] "Generic (PLEG): container finished" podID="f98ada12-3624-482c-ba58-8f0df881923c" containerID="e34dce2ec399f5ea98fc5d85b099bd1a1d4ea5b11473a12a81a24cab496e6ff7" exitCode=0 Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.927238 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f98ada12-3624-482c-ba58-8f0df881923c","Type":"ContainerDied","Data":"e34dce2ec399f5ea98fc5d85b099bd1a1d4ea5b11473a12a81a24cab496e6ff7"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.929558 4846 generic.go:334] "Generic (PLEG): container finished" podID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerID="75cd310a4964dc2a4c149922f563428dff460bbd627b6d621138b092ee12e083" exitCode=0 Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.929599 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerDied","Data":"75cd310a4964dc2a4c149922f563428dff460bbd627b6d621138b092ee12e083"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.931256 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef6fd7f6-5551-4b1a-b743-84778b664a26","Type":"ContainerStarted","Data":"9583b5368430f210ab7615100c50c8344a928a8f228832d20815fafeaade8e9e"} Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.947030 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rl69z" podStartSLOduration=204.947002115 podStartE2EDuration="3m24.947002115s" podCreationTimestamp="2025-12-01 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:10:08.940206562 +0000 UTC m=+229.720975646" watchObservedRunningTime="2025-12-01 00:10:08.947002115 +0000 UTC m=+229.727771199" Dec 01 00:10:08 crc kubenswrapper[4846]: I1201 00:10:08.979635 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=26.979608501 podStartE2EDuration="26.979608501s" podCreationTimestamp="2025-12-01 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-01 00:10:08.973249482 +0000 UTC m=+229.754018556" watchObservedRunningTime="2025-12-01 00:10:08.979608501 +0000 UTC m=+229.760377585" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.174121 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.338914 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access\") pod \"f98ada12-3624-482c-ba58-8f0df881923c\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.338982 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir\") pod \"f98ada12-3624-482c-ba58-8f0df881923c\" (UID: \"f98ada12-3624-482c-ba58-8f0df881923c\") " Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.339181 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f98ada12-3624-482c-ba58-8f0df881923c" (UID: "f98ada12-3624-482c-ba58-8f0df881923c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.353175 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f98ada12-3624-482c-ba58-8f0df881923c" (UID: "f98ada12-3624-482c-ba58-8f0df881923c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.441217 4846 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f98ada12-3624-482c-ba58-8f0df881923c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.441285 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f98ada12-3624-482c-ba58-8f0df881923c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.950466 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f98ada12-3624-482c-ba58-8f0df881923c","Type":"ContainerDied","Data":"47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed"} Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.950535 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47c528a7e849050e262d924c6ee43456d9c19a7d6c3431480c003da32551beed" Dec 01 00:10:10 crc kubenswrapper[4846]: I1201 00:10:10.950538 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 01 00:10:12 crc kubenswrapper[4846]: I1201 00:10:12.969125 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerStarted","Data":"f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e"} Dec 01 00:10:12 crc kubenswrapper[4846]: I1201 00:10:12.992666 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-95f2l" podStartSLOduration=6.256965248 podStartE2EDuration="1m15.992629507s" podCreationTimestamp="2025-12-01 00:08:57 +0000 UTC" firstStartedPulling="2025-12-01 00:09:00.418493701 +0000 UTC m=+161.199262775" lastFinishedPulling="2025-12-01 00:10:10.15415796 +0000 UTC m=+230.934927034" observedRunningTime="2025-12-01 00:10:12.989424047 +0000 UTC m=+233.770193121" watchObservedRunningTime="2025-12-01 00:10:12.992629507 +0000 UTC m=+233.773398581" Dec 01 00:10:13 crc kubenswrapper[4846]: I1201 00:10:13.975530 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerStarted","Data":"066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e"} Dec 01 00:10:13 crc kubenswrapper[4846]: I1201 00:10:13.979488 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerStarted","Data":"bed8de3152113eccad10a290f691d45faf809d7b44c90a61b8355d777f9420c0"} Dec 01 00:10:14 crc kubenswrapper[4846]: I1201 00:10:14.002078 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-97xhs" podStartSLOduration=4.378693291 podStartE2EDuration="1m21.002063946s" podCreationTimestamp="2025-12-01 00:08:53 +0000 UTC" firstStartedPulling="2025-12-01 00:08:56.233093265 +0000 UTC m=+157.013862339" lastFinishedPulling="2025-12-01 00:10:12.85646392 +0000 UTC m=+233.637232994" observedRunningTime="2025-12-01 00:10:14.000657163 +0000 UTC m=+234.781426237" watchObservedRunningTime="2025-12-01 00:10:14.002063946 +0000 UTC m=+234.782833020" Dec 01 00:10:14 crc kubenswrapper[4846]: I1201 00:10:14.003102 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:14 crc kubenswrapper[4846]: I1201 00:10:14.003284 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:14 crc kubenswrapper[4846]: I1201 00:10:14.030849 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w968z" podStartSLOduration=4.338509509 podStartE2EDuration="1m21.030823013s" podCreationTimestamp="2025-12-01 00:08:53 +0000 UTC" firstStartedPulling="2025-12-01 00:08:56.270658948 +0000 UTC m=+157.051428022" lastFinishedPulling="2025-12-01 00:10:12.962972452 +0000 UTC m=+233.743741526" observedRunningTime="2025-12-01 00:10:14.025847568 +0000 UTC m=+234.806616662" watchObservedRunningTime="2025-12-01 00:10:14.030823013 +0000 UTC m=+234.811592087" Dec 01 00:10:15 crc kubenswrapper[4846]: I1201 00:10:15.283404 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-97xhs" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" 
containerName="registry-server" probeResult="failure" output=< Dec 01 00:10:15 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:10:15 crc kubenswrapper[4846]: > Dec 01 00:10:16 crc kubenswrapper[4846]: I1201 00:10:16.997242 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerStarted","Data":"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc"} Dec 01 00:10:17 crc kubenswrapper[4846]: I1201 00:10:17.393967 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:17 crc kubenswrapper[4846]: I1201 00:10:17.394463 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:18 crc kubenswrapper[4846]: I1201 00:10:18.005120 4846 generic.go:334] "Generic (PLEG): container finished" podID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerID="9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc" exitCode=0 Dec 01 00:10:18 crc kubenswrapper[4846]: I1201 00:10:18.005178 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerDied","Data":"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc"} Dec 01 00:10:18 crc kubenswrapper[4846]: I1201 00:10:18.943299 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-95f2l" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="registry-server" probeResult="failure" output=< Dec 01 00:10:18 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:10:18 crc kubenswrapper[4846]: > Dec 01 00:10:23 crc kubenswrapper[4846]: I1201 00:10:23.642816 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:10:23 crc kubenswrapper[4846]: I1201 00:10:23.643171 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:10:24 crc kubenswrapper[4846]: I1201 00:10:24.814361 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w968z" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="registry-server" probeResult="failure" output=< Dec 01 00:10:24 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:10:24 crc kubenswrapper[4846]: > Dec 01 00:10:25 crc kubenswrapper[4846]: I1201 00:10:25.069862 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-97xhs" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" probeResult="failure" output=< Dec 01 00:10:25 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:10:25 crc kubenswrapper[4846]: > Dec 01 00:10:27 crc kubenswrapper[4846]: I1201 00:10:27.453836 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:27 crc kubenswrapper[4846]: I1201 00:10:27.509889 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:28 crc kubenswrapper[4846]: I1201 00:10:28.275753 4846 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:10:29 crc kubenswrapper[4846]: I1201 00:10:29.073081 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-95f2l" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="registry-server" containerID="cri-o://f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" gracePeriod=2 Dec 01 00:10:33 crc kubenswrapper[4846]: I1201 00:10:33.680606 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:10:33 crc kubenswrapper[4846]: I1201 00:10:33.730463 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:10:34 crc kubenswrapper[4846]: I1201 00:10:34.060283 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:34 crc kubenswrapper[4846]: I1201 00:10:34.126506 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:34 crc kubenswrapper[4846]: I1201 00:10:34.914096 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:10:35 crc kubenswrapper[4846]: I1201 00:10:35.110346 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-97xhs" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" containerID="cri-o://066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" gracePeriod=2 Dec 01 00:10:37 crc kubenswrapper[4846]: E1201 00:10:37.395289 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e is running failed: container process not found" containerID="f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:37 crc kubenswrapper[4846]: E1201 00:10:37.396547 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e is running failed: container process not found" containerID="f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:37 crc kubenswrapper[4846]: E1201 00:10:37.397259 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e is running failed: container process not found" containerID="f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:37 crc kubenswrapper[4846]: E1201 00:10:37.397373 4846 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-95f2l" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" 
containerName="registry-server" Dec 01 00:10:40 crc kubenswrapper[4846]: I1201 00:10:40.151564 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-95f2l_6fe501d3-5ba5-4617-a355-0f69d5737dc4/registry-server/0.log" Dec 01 00:10:40 crc kubenswrapper[4846]: I1201 00:10:40.153146 4846 generic.go:334] "Generic (PLEG): container finished" podID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerID="f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" exitCode=137 Dec 01 00:10:40 crc kubenswrapper[4846]: I1201 00:10:40.153186 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerDied","Data":"f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e"} Dec 01 00:10:43 crc kubenswrapper[4846]: I1201 00:10:43.178049 4846 generic.go:334] "Generic (PLEG): container finished" podID="45e0d9f8-77de-43d5-a450-e889f929df32" containerID="066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" exitCode=0 Dec 01 00:10:43 crc kubenswrapper[4846]: I1201 00:10:43.178152 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerDied","Data":"066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e"} Dec 01 00:10:44 crc kubenswrapper[4846]: E1201 00:10:44.004942 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e is running failed: container process not found" containerID="066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:44 crc kubenswrapper[4846]: E1201 00:10:44.005539 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e is running failed: container process not found" containerID="066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:44 crc kubenswrapper[4846]: E1201 00:10:44.005872 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e is running failed: container process not found" containerID="066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:10:44 crc kubenswrapper[4846]: E1201 00:10:44.005899 4846 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-97xhs" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.525069 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-95f2l_6fe501d3-5ba5-4617-a355-0f69d5737dc4/registry-server/0.log" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.525962 4846 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.663987 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities\") pod \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.664044 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lss6\" (UniqueName: \"kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6\") pod \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.664221 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content\") pod \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\" (UID: \"6fe501d3-5ba5-4617-a355-0f69d5737dc4\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.665192 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities" (OuterVolumeSpecName: "utilities") pod "6fe501d3-5ba5-4617-a355-0f69d5737dc4" (UID: "6fe501d3-5ba5-4617-a355-0f69d5737dc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.670997 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6" (OuterVolumeSpecName: "kube-api-access-8lss6") pod "6fe501d3-5ba5-4617-a355-0f69d5737dc4" (UID: "6fe501d3-5ba5-4617-a355-0f69d5737dc4"). InnerVolumeSpecName "kube-api-access-8lss6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.703312 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.766033 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.766072 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lss6\" (UniqueName: \"kubernetes.io/projected/6fe501d3-5ba5-4617-a355-0f69d5737dc4-kube-api-access-8lss6\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.794560 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fe501d3-5ba5-4617-a355-0f69d5737dc4" (UID: "6fe501d3-5ba5-4617-a355-0f69d5737dc4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.866840 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities\") pod \"45e0d9f8-77de-43d5-a450-e889f929df32\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.866880 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chdt\" (UniqueName: \"kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt\") pod \"45e0d9f8-77de-43d5-a450-e889f929df32\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.866967 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content\") pod \"45e0d9f8-77de-43d5-a450-e889f929df32\" (UID: \"45e0d9f8-77de-43d5-a450-e889f929df32\") " Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.867214 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fe501d3-5ba5-4617-a355-0f69d5737dc4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.867738 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities" (OuterVolumeSpecName: "utilities") pod "45e0d9f8-77de-43d5-a450-e889f929df32" (UID: "45e0d9f8-77de-43d5-a450-e889f929df32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.872952 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt" (OuterVolumeSpecName: "kube-api-access-4chdt") pod "45e0d9f8-77de-43d5-a450-e889f929df32" (UID: "45e0d9f8-77de-43d5-a450-e889f929df32"). InnerVolumeSpecName "kube-api-access-4chdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.918948 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45e0d9f8-77de-43d5-a450-e889f929df32" (UID: "45e0d9f8-77de-43d5-a450-e889f929df32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.968882 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.968935 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chdt\" (UniqueName: \"kubernetes.io/projected/45e0d9f8-77de-43d5-a450-e889f929df32-kube-api-access-4chdt\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:44 crc kubenswrapper[4846]: I1201 00:10:44.968954 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e0d9f8-77de-43d5-a450-e889f929df32-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.191436 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-97xhs" event={"ID":"45e0d9f8-77de-43d5-a450-e889f929df32","Type":"ContainerDied","Data":"1f115b8b6eb17d6aafb1cbe6bf4a07ec8c332d05cb8596ceabfdcb7a542d6638"} Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.191464 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-97xhs" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.191502 4846 scope.go:117] "RemoveContainer" containerID="066ead1a400efd677d460cb3057a4573cc33124faac4e60aad1cdfb6d852303e" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.193146 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-95f2l_6fe501d3-5ba5-4617-a355-0f69d5737dc4/registry-server/0.log" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.194147 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-95f2l" event={"ID":"6fe501d3-5ba5-4617-a355-0f69d5737dc4","Type":"ContainerDied","Data":"091a5bb0bdcd094ee559a0e90f91d2688e2e048539e3cf57d68e79641415b292"} Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.194210 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-95f2l" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.245708 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.254720 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-95f2l"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.259433 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.262740 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-97xhs"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.589234 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" path="/var/lib/kubelet/pods/45e0d9f8-77de-43d5-a450-e889f929df32/volumes" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.590079 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" path="/var/lib/kubelet/pods/6fe501d3-5ba5-4617-a355-0f69d5737dc4/volumes" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.688449 4846 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.688897 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689007 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689021 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98ada12-3624-482c-ba58-8f0df881923c" containerName="pruner" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689027 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98ada12-3624-482c-ba58-8f0df881923c" containerName="pruner" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689038 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="extract-content" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689045 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="extract-content" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689052 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="extract-content" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689058 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="extract-content" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689167 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689175 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689186 4846 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="extract-utilities" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689192 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="extract-utilities" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.689203 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="extract-utilities" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689211 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="extract-utilities" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689499 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe501d3-5ba5-4617-a355-0f69d5737dc4" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689519 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98ada12-3624-482c-ba58-8f0df881923c" containerName="pruner" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689529 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e0d9f8-77de-43d5-a450-e889f929df32" containerName="registry-server" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.689919 4846 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690200 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6" gracePeriod=15 Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690361 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690466 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33" gracePeriod=15 Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690703 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede" gracePeriod=15 Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690767 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9" gracePeriod=15 Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.690819 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad" gracePeriod=15 Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.693823 4846 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694209 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694238 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694258 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694271 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694290 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694302 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694316 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694326 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694342 4846 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694352 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694370 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694381 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 00:10:45 crc kubenswrapper[4846]: E1201 00:10:45.694396 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694408 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694591 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694610 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694624 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694639 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694655 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.694665 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.738616 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.778974 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779042 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779111 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779138 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779160 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779204 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779237 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.779271 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880215 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880291 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880363 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880400 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880479 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880530 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880559 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880615 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880761 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880810 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880867 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.880968 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.881006 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.881059 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.881073 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:45 crc kubenswrapper[4846]: I1201 00:10:45.881092 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.037208 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.113830 4846 scope.go:117] "RemoveContainer" containerID="02e32f6cfb35110c893a7ecd9f5321d40ed5789e4be87d040e860c1c10b3b476" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.115238 4846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-xjt44.187ceeeac4ccab24 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-xjt44,UID:02d89d24-0fd5-41c1-a392-27a63409d1c3,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.531s (30.532s including waiting). 
Image size: 1202665305 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,LastTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 00:10:46 crc kubenswrapper[4846]: W1201 00:10:46.197955 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9e1e1a0eb30d7090dac2441cb696df45ec6339766a832916f66a5b83ab84cfe3 WatchSource:0}: Error finding container 9e1e1a0eb30d7090dac2441cb696df45ec6339766a832916f66a5b83ab84cfe3: Status 404 returned error can't find the container with id 9e1e1a0eb30d7090dac2441cb696df45ec6339766a832916f66a5b83ab84cfe3 Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.203639 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.205344 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.205956 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad" exitCode=2 Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.295174 4846 scope.go:117] "RemoveContainer" containerID="025e1d9215763f6f6621f54a95fe598643f1f08fa734a025c34b16f2f144321d" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.341186 4846 scope.go:117] "RemoveContainer" containerID="f1979bbaf4b85be5182989981360c73dca31837d31b4da2d0c11a115989f065e" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.359862 4846 scope.go:117] "RemoveContainer" containerID="9c3dc8efb391849d38d0f4119c6ed3520d560098eb01e7ac315c933a37aa987a" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.393399 4846 scope.go:117] "RemoveContainer" containerID="5cc9484a1ab26b7c42b52fc0cfec5995db2ead4b6735fa26e39a9580a0f340d9" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.555091 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202665305},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:4ad71786e41377bb6599aba75541c8fa3e19235efe5e472e7509992850479bf1\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:93aeee8c880c0992e238ed49370a486420eac0ba21279d795936df02d6fc07f4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201199947},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807
ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.555632 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:46 crc 
kubenswrapper[4846]: E1201 00:10:46.555960 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.556404 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.556846 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:46 crc kubenswrapper[4846]: E1201 00:10:46.556887 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.594319 4846 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 01 00:10:46 crc kubenswrapper[4846]: I1201 00:10:46.594413 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.225424 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9e1e1a0eb30d7090dac2441cb696df45ec6339766a832916f66a5b83ab84cfe3"} Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.227838 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.230034 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.230660 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33" exitCode=0 Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.230711 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9" exitCode=0 Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.230721 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede" exitCode=0 Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.230775 4846 scope.go:117] "RemoveContainer" containerID="cd21a41c6d909636be8f3c0037540c042884ee9aec039bb6cc8da2cf44693915" Dec 01 00:10:47 crc kubenswrapper[4846]: 
I1201 00:10:47.232983 4846 generic.go:334] "Generic (PLEG): container finished" podID="ef6fd7f6-5551-4b1a-b743-84778b664a26" containerID="9583b5368430f210ab7615100c50c8344a928a8f228832d20815fafeaade8e9e" exitCode=0 Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.233047 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ef6fd7f6-5551-4b1a-b743-84778b664a26","Type":"ContainerDied","Data":"9583b5368430f210ab7615100c50c8344a928a8f228832d20815fafeaade8e9e"} Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.234184 4846 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.234614 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:47 crc kubenswrapper[4846]: I1201 00:10:47.234933 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.245567 4846 generic.go:334] "Generic (PLEG): container finished" podID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerID="189ac41019b342ce1145cdcf53bc70a4ba074d01f2323eed0fd143396590f74f" exitCode=0 Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.245656 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerDied","Data":"189ac41019b342ce1145cdcf53bc70a4ba074d01f2323eed0fd143396590f74f"} Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.247784 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.248153 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.248629 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.249670 
4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerStarted","Data":"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093"} Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.250615 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.250928 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.251399 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.251662 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.252816 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerStarted","Data":"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c"} Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.253311 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.253475 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.253847 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.254191 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" 
pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.254670 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ce9f894a4cc4f0c6545dd054e6ef6b93cdd5682e919e3b94dcbf356d3b93a8e3"} Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.254740 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.255180 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.255474 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.255805 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.256129 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: I1201 00:10:48.256459 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:48 crc kubenswrapper[4846]: E1201 00:10:48.977369 4846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-xjt44.187ceeeac4ccab24 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-xjt44,UID:02d89d24-0fd5-41c1-a392-27a63409d1c3,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.531s (30.532s including waiting). Image size: 1202665305 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,LastTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.265552 4846 generic.go:334] "Generic (PLEG): container finished" podID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerID="b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c" exitCode=0 Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.265886 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerDied","Data":"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c"} Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.266876 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.267179 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.267519 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.267814 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.268093 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.268721 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"ef6fd7f6-5551-4b1a-b743-84778b664a26","Type":"ContainerDied","Data":"92de7e1beb555e918e9c9c87c8f6f165c68a07efa149b4584c57d1fdaceded52"} Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.268767 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92de7e1beb555e918e9c9c87c8f6f165c68a07efa149b4584c57d1fdaceded52" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.272458 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.273538 4846 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6" exitCode=0 Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.316655 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.317192 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.317436 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.317638 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.317842 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.318075 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.430895 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock\") pod \"ef6fd7f6-5551-4b1a-b743-84778b664a26\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431008 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir\") pod \"ef6fd7f6-5551-4b1a-b743-84778b664a26\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431030 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock" (OuterVolumeSpecName: "var-lock") pod "ef6fd7f6-5551-4b1a-b743-84778b664a26" (UID: "ef6fd7f6-5551-4b1a-b743-84778b664a26"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431061 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access\") pod \"ef6fd7f6-5551-4b1a-b743-84778b664a26\" (UID: \"ef6fd7f6-5551-4b1a-b743-84778b664a26\") " Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431142 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef6fd7f6-5551-4b1a-b743-84778b664a26" (UID: "ef6fd7f6-5551-4b1a-b743-84778b664a26"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431278 4846 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.431289 4846 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef6fd7f6-5551-4b1a-b743-84778b664a26-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.439098 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef6fd7f6-5551-4b1a-b743-84778b664a26" (UID: "ef6fd7f6-5551-4b1a-b743-84778b664a26"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.533386 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef6fd7f6-5551-4b1a-b743-84778b664a26-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.585371 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.585966 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.586473 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.588020 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:49 crc kubenswrapper[4846]: I1201 00:10:49.588447 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.277663 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.281786 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.281951 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.282096 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.282238 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.282376 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.576354 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.577380 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.578126 4846 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.578472 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.578929 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.579304 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.579646 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.580016 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.749874 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.750005 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.750065 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.750585 4846 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.751041 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.751102 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.852108 4846 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.852154 4846 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:50 crc kubenswrapper[4846]: I1201 00:10:50.852169 4846 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.285980 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.287172 4846 scope.go:117] "RemoveContainer" containerID="fedea3f7bfc7dc09d609b5717cb93ffc6358c3b6594f62eae445bdc7bce28b33" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.287385 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.302117 4846 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.302367 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.302518 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.303098 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.303367 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.303701 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:51 crc kubenswrapper[4846]: I1201 00:10:51.592639 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.900582 4846 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.903221 4846 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.903899 4846 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: 
connect: connection refused" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.904505 4846 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.904975 4846 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:52 crc kubenswrapper[4846]: I1201 00:10:52.905039 4846 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 01 00:10:52 crc kubenswrapper[4846]: E1201 00:10:52.905493 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Dec 01 00:10:53 crc kubenswrapper[4846]: E1201 00:10:53.106825 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Dec 01 00:10:53 crc kubenswrapper[4846]: E1201 00:10:53.508316 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Dec 01 00:10:54 crc kubenswrapper[4846]: E1201 00:10:54.309208 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Dec 01 00:10:55 crc kubenswrapper[4846]: E1201 00:10:55.910337 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.944991 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.945036 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.992628 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.993186 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.993495 4846 status_manager.go:851] "Failed to get status for pod" 
podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.994015 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.994293 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:55 crc kubenswrapper[4846]: I1201 00:10:55.994603 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.373085 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.373874 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.374501 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.374939 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.375225 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: I1201 00:10:56.375550 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.572330 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:10:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202665305},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:4ad71786e41377bb6599aba75541c8fa3e19235efe5e472e7509992850479bf1\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:93aeee8c880c0992e238ed49370a486420eac0ba21279d795936df02d6fc07f4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201199947},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\
\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 
01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.573471 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.574714 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.575155 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.575527 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:56 crc kubenswrapper[4846]: E1201 00:10:56.575714 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:10:58 crc kubenswrapper[4846]: E1201 00:10:58.979364 4846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-xjt44.187ceeeac4ccab24 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-xjt44,UID:02d89d24-0fd5-41c1-a392-27a63409d1c3,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.531s (30.532s including waiting). Image size: 1202665305 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,LastTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.092728 4846 scope.go:117] "RemoveContainer" containerID="61c0654d11477d2b34f2769c668da00564cabb9fe2ec7237ad7b5d0855b288c9" Dec 01 00:10:59 crc kubenswrapper[4846]: E1201 00:10:59.111785 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.350029 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.580816 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.584536 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.585046 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.585375 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.586354 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.586788 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.587263 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.587743 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.588434 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.588961 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.589323 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.609518 4846 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.609548 4846 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:10:59 crc kubenswrapper[4846]: E1201 00:10:59.610037 4846 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:10:59 crc kubenswrapper[4846]: I1201 00:10:59.610603 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.363027 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.364041 4846 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81" exitCode=1 Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.364116 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81"} Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.365052 4846 scope.go:117] "RemoveContainer" containerID="792d2adb549f86b2519bae725b8fba4aa59c7e5b690adba14750a9a8bb1d3e81" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.365725 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.366598 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.367362 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.368124 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.368647 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.369120 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:00 crc kubenswrapper[4846]: I1201 00:11:00.534923 4846 scope.go:117] "RemoveContainer" containerID="8117c86ec24ea06a9982733f7bfd8276eb38eba412f6b4197f188b0a9c46fede" Dec 01 00:11:06 crc kubenswrapper[4846]: I1201 00:11:01.370458 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 01 00:11:06 crc kubenswrapper[4846]: I1201 00:11:01.706004 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:06 crc kubenswrapper[4846]: I1201 00:11:05.075553 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:05.513458 4846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="7s" Dec 01 00:11:06 crc kubenswrapper[4846]: I1201 00:11:06.428736 4846 scope.go:117] "RemoveContainer" containerID="46153d30e11d707ebd6d7d0afdabf7213940d7407c96cc184687a4dbc687afad" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:06.890021 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:11:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:11:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:11:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-01T00:11:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1202665305},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:4ad71786e41377bb6599aba75541c8fa3e19235efe5e472e7509992850479bf1\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:93aeee8c880c0992e238ed49370a486420eac0ba21279d795936df02d6fc07f4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1201199947},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807
ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:06.891165 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:06 crc 
kubenswrapper[4846]: E1201 00:11:06.891796 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:06.892026 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:06.892383 4846 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:06 crc kubenswrapper[4846]: E1201 00:11:06.892414 4846 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 01 00:11:07 crc kubenswrapper[4846]: I1201 00:11:07.051575 4846 scope.go:117] "RemoveContainer" containerID="e0d9e07b88c8535c839521296c95bfd3f667bbc03d5fd2c6e1613a2ec8ce4df6" Dec 01 00:11:07 crc kubenswrapper[4846]: W1201 00:11:07.056020 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-304d45df77e194584647db93eb77fcd562a31681488ff8420af33158c2fd5ae1 WatchSource:0}: Error finding container 304d45df77e194584647db93eb77fcd562a31681488ff8420af33158c2fd5ae1: Status 404 returned error can't find the container with id 304d45df77e194584647db93eb77fcd562a31681488ff8420af33158c2fd5ae1 Dec 01 00:11:07 crc kubenswrapper[4846]: I1201 00:11:07.410583 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"304d45df77e194584647db93eb77fcd562a31681488ff8420af33158c2fd5ae1"} Dec 01 00:11:07 crc kubenswrapper[4846]: I1201 00:11:07.445658 4846 scope.go:117] "RemoveContainer" containerID="c588f3820a1301dab8324e88ff7e40052dd95b5a41a31f9eb68fafeadefca4b5" Dec 01 00:11:08 crc kubenswrapper[4846]: E1201 00:11:08.980569 4846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-xjt44.187ceeeac4ccab24 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-xjt44,UID:02d89d24-0fd5-41c1-a392-27a63409d1c3,APIVersion:v1,ResourceVersion:28011,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 30.531s (30.532s including waiting). 
Image size: 1202665305 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,LastTimestamp:2025-12-01 00:10:46.113897252 +0000 UTC m=+266.894666346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.071398 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.436499 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.436856 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3dc4192bea760ad31ee1d4ef1c19602e45387b449ab444b8d27eb0b137def6a5"} Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.437730 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.438172 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.438438 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.438672 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.438916 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.439150 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: 
connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.440273 4846 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e3041ca2d1cfd7f9456fddb4fb1290a28ea5cbce1bfde9b9ce6b36cdd7895184" exitCode=0 Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.440320 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e3041ca2d1cfd7f9456fddb4fb1290a28ea5cbce1bfde9b9ce6b36cdd7895184"} Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.440530 4846 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.440550 4846 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:09 crc kubenswrapper[4846]: E1201 00:11:09.440825 4846 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.441036 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.441319 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.441579 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.441930 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.442164 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.442396 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" 
pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.444470 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerStarted","Data":"da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea"} Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.445615 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.446023 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.446273 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.446564 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.446832 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.446980 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.447411 4846 generic.go:334] "Generic (PLEG): container finished" podID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerID="41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162" exitCode=0 Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.447454 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerDied","Data":"41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162"} Dec 01 
00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.448229 4846 status_manager.go:851] "Failed to get status for pod" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" pod="openshift-marketplace/certified-operators-ct8wp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ct8wp\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.448768 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.449085 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.449499 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.449808 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.450546 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.450889 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.451224 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerStarted","Data":"43dc27abd9f8bc32e13187d24c38b4d4b91dcb8b9a85fa408e6589bdcb19bcf4"} Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.451838 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 
00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.452102 4846 status_manager.go:851] "Failed to get status for pod" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" pod="openshift-marketplace/certified-operators-ct8wp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ct8wp\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.452372 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.452661 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.452912 4846 status_manager.go:851] "Failed to get status for pod" podUID="54eba203-c984-4df6-91bc-ba04e655e541" pod="openshift-marketplace/redhat-operators-8j266" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8j266\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.453154 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.453457 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.453706 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.587976 4846 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.588344 4846 status_manager.go:851] "Failed to get status for pod" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" pod="openshift-marketplace/certified-operators-ct8wp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ct8wp\": dial tcp 
38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.588491 4846 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.588636 4846 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.588823 4846 status_manager.go:851] "Failed to get status for pod" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" pod="openshift-marketplace/redhat-marketplace-44ddl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-44ddl\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.588985 4846 status_manager.go:851] "Failed to get status for pod" podUID="54eba203-c984-4df6-91bc-ba04e655e541" pod="openshift-marketplace/redhat-operators-8j266" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-8j266\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.589142 4846 status_manager.go:851] "Failed to get status for pod" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" pod="openshift-marketplace/redhat-marketplace-sdf9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdf9f\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.589845 4846 status_manager.go:851] "Failed to get status for pod" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:09 crc kubenswrapper[4846]: I1201 00:11:09.590026 4846 status_manager.go:851] "Failed to get status for pod" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" pod="openshift-marketplace/certified-operators-xjt44" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xjt44\": dial tcp 38.102.83.177:6443: connect: connection refused" Dec 01 00:11:10 crc kubenswrapper[4846]: I1201 00:11:10.461024 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerStarted","Data":"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db"} Dec 01 00:11:10 crc kubenswrapper[4846]: I1201 00:11:10.463792 4846 generic.go:334] "Generic (PLEG): container finished" podID="54eba203-c984-4df6-91bc-ba04e655e541" containerID="43dc27abd9f8bc32e13187d24c38b4d4b91dcb8b9a85fa408e6589bdcb19bcf4" exitCode=0 Dec 01 00:11:10 crc kubenswrapper[4846]: I1201 00:11:10.463881 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerDied","Data":"43dc27abd9f8bc32e13187d24c38b4d4b91dcb8b9a85fa408e6589bdcb19bcf4"} Dec 01 00:11:10 crc kubenswrapper[4846]: I1201 00:11:10.466481 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d51469ae30f5b923ca8031c6970bde21c0f63141474d2efd452bb31f2d3455d"} Dec 01 00:11:11 crc kubenswrapper[4846]: I1201 00:11:11.473579 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91df75dfe665d7443ecd5abe9ab37bb18be23d3cd121140222762cc80b0b8462"} Dec 01 00:11:11 crc kubenswrapper[4846]: I1201 00:11:11.705896 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:13 crc kubenswrapper[4846]: I1201 00:11:13.735147 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:11:13 crc kubenswrapper[4846]: I1201 00:11:13.735671 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:11:13 crc kubenswrapper[4846]: I1201 00:11:13.789090 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:11:14 crc kubenswrapper[4846]: I1201 00:11:14.492050 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerStarted","Data":"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053"} Dec 01 00:11:14 crc kubenswrapper[4846]: I1201 00:11:14.494541 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e714ecd19e2acdf8dbf4f40d60da7e049a5e6aa8285ff1175bfdac29cd20e18d"} Dec 01 00:11:15 crc kubenswrapper[4846]: I1201 00:11:15.540720 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:11:15 crc kubenswrapper[4846]: I1201 00:11:15.540777 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:11:15 crc kubenswrapper[4846]: I1201 00:11:15.586339 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:11:16 crc kubenswrapper[4846]: I1201 00:11:16.552253 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:11:19 crc kubenswrapper[4846]: I1201 00:11:19.070884 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:19 crc kubenswrapper[4846]: I1201 00:11:19.075697 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:19 crc kubenswrapper[4846]: I1201 00:11:19.534945 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 01 00:11:20 crc kubenswrapper[4846]: I1201 00:11:20.537561 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"68088b912b789b3109d35a5703da84dafd3bd562b8f98c98287aa16413dfe95d"} Dec 01 00:11:20 crc kubenswrapper[4846]: I1201 00:11:20.540319 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerStarted","Data":"d631510b16c37b023cafbca2e96442c3415816af2cf2b3fa67ccf3aa9a8a1d7a"} Dec 01 00:11:23 crc kubenswrapper[4846]: I1201 00:11:23.774951 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:11:24 crc kubenswrapper[4846]: I1201 00:11:24.183825 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:11:24 crc kubenswrapper[4846]: I1201 00:11:24.183904 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:11:24 crc kubenswrapper[4846]: I1201 00:11:24.219979 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:11:24 crc kubenswrapper[4846]: I1201 00:11:24.702041 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.590803 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c91e48b0a1d92a9a82400e7aa538996db46cb61208e26277082727fd4ec04169"} Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.591390 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.591212 4846 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.591428 4846 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.597549 4846 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:11:28 crc kubenswrapper[4846]: I1201 00:11:28.615749 4846 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d51469ae30f5b923ca8031c6970bde21c0f63141474d2efd452bb31f2d3455d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e714ecd19e2acdf8dbf4f40d60da7e049a5e6aa8285ff1175bfdac29cd20e18d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91df75dfe665d7443ecd5abe9ab37bb18be23d3cd121140222762cc80b0b8462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91e48b0a1d92a9a82400e7aa538996db46cb61208e26277082727fd4ec04169\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:11:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68088b912b789b3109d35a5703da84dafd3bd562b8f98c98287aa16413dfe95d\\\",\\\"image\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-01T00:11:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"a25bf9be-7d8c-43a5-a9ed-76b3e32e2239\": field is immutable" Dec 01 00:11:29 crc kubenswrapper[4846]: I1201 00:11:29.595523 4846 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:29 crc kubenswrapper[4846]: I1201 00:11:29.595842 4846 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a25bf9be-7d8c-43a5-a9ed-76b3e32e2239" Dec 01 00:11:29 crc kubenswrapper[4846]: I1201 00:11:29.603333 4846 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="96f4490b-227e-40ba-88c7-682c4818c136" Dec 01 00:11:36 crc kubenswrapper[4846]: I1201 00:11:36.986588 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:11:36 crc kubenswrapper[4846]: I1201 00:11:36.987205 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:11:37 crc kubenswrapper[4846]: I1201 00:11:37.030382 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:11:37 crc kubenswrapper[4846]: I1201 00:11:37.683988 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:11:38 crc kubenswrapper[4846]: I1201 00:11:38.198103 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 01 00:11:41 crc kubenswrapper[4846]: I1201 00:11:41.076505 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 01 00:11:42 crc kubenswrapper[4846]: I1201 00:11:42.953636 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 00:11:46 crc kubenswrapper[4846]: I1201 00:11:46.195597 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 01 00:11:46 crc kubenswrapper[4846]: I1201 00:11:46.415241 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 01 00:11:46 crc kubenswrapper[4846]: I1201 00:11:46.909869 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 01 00:11:47 crc kubenswrapper[4846]: I1201 00:11:47.439658 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 01 00:11:47 crc 
kubenswrapper[4846]: I1201 00:11:47.473984 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 01 00:11:47 crc kubenswrapper[4846]: I1201 00:11:47.623772 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 01 00:11:48 crc kubenswrapper[4846]: I1201 00:11:48.289394 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 01 00:11:48 crc kubenswrapper[4846]: I1201 00:11:48.440649 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 01 00:11:48 crc kubenswrapper[4846]: I1201 00:11:48.653415 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 01 00:11:49 crc kubenswrapper[4846]: I1201 00:11:49.132751 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 01 00:11:49 crc kubenswrapper[4846]: I1201 00:11:49.369131 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 01 00:11:49 crc kubenswrapper[4846]: I1201 00:11:49.938288 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 01 00:11:49 crc kubenswrapper[4846]: I1201 00:11:49.975979 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.345934 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.463610 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.916043 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.939322 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.940475 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 01 00:11:50 crc kubenswrapper[4846]: I1201 00:11:50.988882 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.493541 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.548593 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.635010 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.714672 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.805898 4846 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 01 00:11:51 crc kubenswrapper[4846]: I1201 00:11:51.907739 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 00:11:52 crc kubenswrapper[4846]: I1201 00:11:52.164858 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 01 00:11:52 crc kubenswrapper[4846]: I1201 00:11:52.268928 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 01 00:11:52 crc kubenswrapper[4846]: I1201 00:11:52.778280 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 01 00:11:53 crc kubenswrapper[4846]: I1201 00:11:53.072766 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 01 00:11:53 crc kubenswrapper[4846]: I1201 00:11:53.196577 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 01 00:11:53 crc kubenswrapper[4846]: I1201 00:11:53.510628 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 01 00:11:53 crc kubenswrapper[4846]: I1201 00:11:53.902211 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 01 00:11:53 crc kubenswrapper[4846]: I1201 00:11:53.952719 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.046404 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.072764 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.085577 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.350403 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.387175 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.465766 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.725284 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 01 00:11:54 crc kubenswrapper[4846]: I1201 00:11:54.890360 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.054032 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.175365 4846 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.329159 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.386057 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.439108 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.678232 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 01 00:11:55 crc kubenswrapper[4846]: I1201 00:11:55.942862 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.024019 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.041253 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.068932 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.166602 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.176211 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.267209 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.389598 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.570603 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.613637 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.630034 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 01 00:11:56 crc kubenswrapper[4846]: I1201 00:11:56.886159 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.062665 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.173161 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 
00:11:57.238171 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.337907 4846 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6rhj6 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.338410 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.338027 4846 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6rhj6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.338552 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.343733 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.352544 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.530904 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.545629 4846 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.639050 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.748566 4846 generic.go:334] "Generic (PLEG): container finished" podID="e31602df-d2bc-40de-93be-42600c22a9c1" containerID="803063f3b8156c1593a584d6d9c6be546f1be1f9df7b37d498b0b1c6270ccd38" exitCode=0 Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.748610 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerDied","Data":"803063f3b8156c1593a584d6d9c6be546f1be1f9df7b37d498b0b1c6270ccd38"} Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.749160 4846 scope.go:117] "RemoveContainer" containerID="803063f3b8156c1593a584d6d9c6be546f1be1f9df7b37d498b0b1c6270ccd38" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.775800 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 
00:11:57.835632 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.860698 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 01 00:11:57 crc kubenswrapper[4846]: I1201 00:11:57.945978 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.084452 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.154517 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.222189 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.271972 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.563028 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.599351 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.665804 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.689706 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.695133 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.706194 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.758584 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6rhj6_e31602df-d2bc-40de-93be-42600c22a9c1/marketplace-operator/1.log" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.759952 4846 generic.go:334] "Generic (PLEG): container finished" podID="e31602df-d2bc-40de-93be-42600c22a9c1" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" exitCode=1 Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.760007 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerDied","Data":"062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018"} Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.760041 4846 scope.go:117] "RemoveContainer" containerID="803063f3b8156c1593a584d6d9c6be546f1be1f9df7b37d498b0b1c6270ccd38" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.760673 4846 scope.go:117] "RemoveContainer" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" 
Dec 01 00:11:58 crc kubenswrapper[4846]: E1201 00:11:58.760992 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6rhj6_openshift-marketplace(e31602df-d2bc-40de-93be-42600c22a9c1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.906888 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 01 00:11:58 crc kubenswrapper[4846]: I1201 00:11:58.930817 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.113348 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.362400 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.373420 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.396105 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.419807 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.500832 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.578103 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.664663 4846 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.666123 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8j266" podStartSLOduration=44.598202913 podStartE2EDuration="3m3.666109004s" podCreationTimestamp="2025-12-01 00:08:56 +0000 UTC" firstStartedPulling="2025-12-01 00:08:58.333237193 +0000 UTC m=+159.114006267" lastFinishedPulling="2025-12-01 00:11:17.401143294 +0000 UTC m=+298.181912358" observedRunningTime="2025-12-01 00:11:27.603272523 +0000 UTC m=+308.384041607" watchObservedRunningTime="2025-12-01 00:11:59.666109004 +0000 UTC m=+340.446878078" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.667307 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=74.667298762 podStartE2EDuration="1m14.667298762s" podCreationTimestamp="2025-12-01 00:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:11:16.81200914 +0000 UTC m=+297.592778214" watchObservedRunningTime="2025-12-01 00:11:59.667298762 +0000 UTC m=+340.448067826" Dec 01 
00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.667778 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44ddl" podStartSLOduration=78.384913864 podStartE2EDuration="3m4.667772787s" podCreationTimestamp="2025-12-01 00:08:55 +0000 UTC" firstStartedPulling="2025-12-01 00:08:58.295546056 +0000 UTC m=+159.076315130" lastFinishedPulling="2025-12-01 00:10:44.578404979 +0000 UTC m=+265.359174053" observedRunningTime="2025-12-01 00:11:16.709768774 +0000 UTC m=+297.490537858" watchObservedRunningTime="2025-12-01 00:11:59.667772787 +0000 UTC m=+340.448541861" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.668068 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjt44" podStartSLOduration=53.549426736 podStartE2EDuration="3m6.668063947s" podCreationTimestamp="2025-12-01 00:08:53 +0000 UTC" firstStartedPulling="2025-12-01 00:08:56.270744701 +0000 UTC m=+157.051513775" lastFinishedPulling="2025-12-01 00:11:09.389381912 +0000 UTC m=+290.170150986" observedRunningTime="2025-12-01 00:11:16.798662067 +0000 UTC m=+297.579431151" watchObservedRunningTime="2025-12-01 00:11:59.668063947 +0000 UTC m=+340.448833021" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.668137 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdf9f" podStartSLOduration=55.562603304 podStartE2EDuration="3m4.668135419s" podCreationTimestamp="2025-12-01 00:08:55 +0000 UTC" firstStartedPulling="2025-12-01 00:08:58.286856979 +0000 UTC m=+159.067626053" lastFinishedPulling="2025-12-01 00:11:07.392389094 +0000 UTC m=+288.173158168" observedRunningTime="2025-12-01 00:11:16.766480989 +0000 UTC m=+297.547250073" watchObservedRunningTime="2025-12-01 00:11:59.668135419 +0000 UTC m=+340.448904493" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.668938 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ct8wp" podStartSLOduration=49.911533822 podStartE2EDuration="3m6.668933514s" podCreationTimestamp="2025-12-01 00:08:53 +0000 UTC" firstStartedPulling="2025-12-01 00:08:56.271196617 +0000 UTC m=+157.051965691" lastFinishedPulling="2025-12-01 00:11:13.028596309 +0000 UTC m=+293.809365383" observedRunningTime="2025-12-01 00:11:16.832124786 +0000 UTC m=+297.612893860" watchObservedRunningTime="2025-12-01 00:11:59.668933514 +0000 UTC m=+340.449702588" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.669351 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.669400 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.673743 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.691077 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=31.691051393 podStartE2EDuration="31.691051393s" podCreationTimestamp="2025-12-01 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:11:59.684550495 +0000 UTC m=+340.465319589" 
watchObservedRunningTime="2025-12-01 00:11:59.691051393 +0000 UTC m=+340.471820467" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.709048 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.767439 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6rhj6_e31602df-d2bc-40de-93be-42600c22a9c1/marketplace-operator/1.log" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.860279 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.975145 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 01 00:11:59 crc kubenswrapper[4846]: I1201 00:11:59.988432 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 01 00:12:00 crc kubenswrapper[4846]: I1201 00:12:00.411975 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 01 00:12:00 crc kubenswrapper[4846]: I1201 00:12:00.450497 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 01 00:12:00 crc kubenswrapper[4846]: I1201 00:12:00.510590 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 01 00:12:00 crc kubenswrapper[4846]: I1201 00:12:00.750878 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.058291 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.142625 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.164596 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.178902 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.216573 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.263021 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.479322 4846 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.484468 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.739649 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 00:12:01 crc kubenswrapper[4846]: I1201 00:12:01.758497 4846 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.017571 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.259123 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.272010 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.314571 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.373340 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.512655 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.590007 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.592290 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.624845 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 01 00:12:02 crc kubenswrapper[4846]: I1201 00:12:02.917819 4846 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.130640 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.401113 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.694730 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.733278 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.787759 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.797080 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.802505 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 01 00:12:03 crc kubenswrapper[4846]: I1201 00:12:03.954987 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.006998 4846 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.007168 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.088922 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.118843 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.248271 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.375631 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.539949 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.597856 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.599981 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.610836 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.610894 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.618779 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.779588 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.798348 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.853944 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.884566 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.897893 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.915735 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 00:12:04 crc kubenswrapper[4846]: I1201 00:12:04.987353 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.058663 4846 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.159276 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.176725 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.233539 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.347915 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.502252 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.836613 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 01 00:12:05 crc kubenswrapper[4846]: I1201 00:12:05.888778 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.050529 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.098706 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.196392 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.316643 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.337847 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.367695 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.367993 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.381555 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.434141 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.436970 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.546340 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 01 
00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.611269 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.640760 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.702221 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.771627 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.775767 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.796214 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.802658 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 00:12:06 crc kubenswrapper[4846]: I1201 00:12:06.899910 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.031006 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.086701 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.111754 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.246840 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.300939 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.336598 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.336719 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.337377 4846 scope.go:117] "RemoveContainer" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" Dec 01 00:12:07 crc kubenswrapper[4846]: E1201 00:12:07.337629 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6rhj6_openshift-marketplace(e31602df-d2bc-40de-93be-42600c22a9c1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" 
Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.371751 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.413791 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.428268 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.527165 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 01 00:12:07 crc kubenswrapper[4846]: I1201 00:12:07.810010 4846 scope.go:117] "RemoveContainer" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" Dec 01 00:12:07 crc kubenswrapper[4846]: E1201 00:12:07.810337 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-6rhj6_openshift-marketplace(e31602df-d2bc-40de-93be-42600c22a9c1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" Dec 01 00:12:08 crc kubenswrapper[4846]: I1201 00:12:08.333959 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 01 00:12:08 crc kubenswrapper[4846]: I1201 00:12:08.424587 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 01 00:12:08 crc kubenswrapper[4846]: I1201 00:12:08.836788 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 01 00:12:09 crc kubenswrapper[4846]: I1201 00:12:09.125841 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 01 00:12:09 crc kubenswrapper[4846]: I1201 00:12:09.310942 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 01 00:12:09 crc kubenswrapper[4846]: I1201 00:12:09.367588 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 01 00:12:09 crc kubenswrapper[4846]: I1201 00:12:09.962487 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 01 00:12:09 crc kubenswrapper[4846]: I1201 00:12:09.970268 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.260775 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.290942 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.437671 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.470043 4846 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.501674 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.806919 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 01 00:12:10 crc kubenswrapper[4846]: I1201 00:12:10.819451 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 01 00:12:11 crc kubenswrapper[4846]: I1201 00:12:11.002264 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 01 00:12:11 crc kubenswrapper[4846]: I1201 00:12:11.090894 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 01 00:12:11 crc kubenswrapper[4846]: I1201 00:12:11.241421 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 00:12:11 crc kubenswrapper[4846]: I1201 00:12:11.725799 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.249238 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.301030 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.366447 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.580934 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.591490 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.649018 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.726598 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.799856 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.902362 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.954801 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 01 00:12:12 crc kubenswrapper[4846]: I1201 00:12:12.978358 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.475815 4846 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.562081 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.602267 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.669308 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.682147 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.846022 4846 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 00:12:13 crc kubenswrapper[4846]: I1201 00:12:13.847099 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ce9f894a4cc4f0c6545dd054e6ef6b93cdd5682e919e3b94dcbf356d3b93a8e3" gracePeriod=5 Dec 01 00:12:14 crc kubenswrapper[4846]: I1201 00:12:14.047917 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 00:12:14 crc kubenswrapper[4846]: I1201 00:12:14.490423 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 00:12:14 crc kubenswrapper[4846]: I1201 00:12:14.741566 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 01 00:12:14 crc kubenswrapper[4846]: I1201 00:12:14.787299 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 01 00:12:14 crc kubenswrapper[4846]: I1201 00:12:14.868485 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.089045 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.105466 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.224601 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.411472 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.474749 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.682923 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 
00:12:15.742629 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.873937 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 01 00:12:15 crc kubenswrapper[4846]: I1201 00:12:15.983476 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 01 00:12:16 crc kubenswrapper[4846]: I1201 00:12:16.073131 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 01 00:12:16 crc kubenswrapper[4846]: I1201 00:12:16.108671 4846 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 01 00:12:16 crc kubenswrapper[4846]: I1201 00:12:16.287188 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 01 00:12:16 crc kubenswrapper[4846]: I1201 00:12:16.325335 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 01 00:12:16 crc kubenswrapper[4846]: I1201 00:12:16.634327 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 01 00:12:17 crc kubenswrapper[4846]: I1201 00:12:17.061175 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 01 00:12:17 crc kubenswrapper[4846]: I1201 00:12:17.156764 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 01 00:12:17 crc kubenswrapper[4846]: I1201 00:12:17.409251 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 01 00:12:17 crc kubenswrapper[4846]: I1201 00:12:17.777004 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 01 00:12:18 crc kubenswrapper[4846]: I1201 00:12:18.382464 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 01 00:12:18 crc kubenswrapper[4846]: I1201 00:12:18.742126 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 01 00:12:18 crc kubenswrapper[4846]: I1201 00:12:18.757939 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.098030 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.133451 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.207144 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.310174 4846 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.363015 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 00:12:19 crc kubenswrapper[4846]: I1201 00:12:19.959618 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 00:12:20 crc kubenswrapper[4846]: I1201 00:12:20.416799 4846 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 01 00:12:20 crc kubenswrapper[4846]: I1201 00:12:20.580351 4846 scope.go:117] "RemoveContainer" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" Dec 01 00:12:20 crc kubenswrapper[4846]: I1201 00:12:20.918349 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 00:12:20 crc kubenswrapper[4846]: I1201 00:12:20.918822 4846 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ce9f894a4cc4f0c6545dd054e6ef6b93cdd5682e919e3b94dcbf356d3b93a8e3" exitCode=137 Dec 01 00:12:21 crc kubenswrapper[4846]: I1201 00:12:21.655992 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 00:12:21 crc kubenswrapper[4846]: I1201 00:12:21.742788 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 01 00:12:22 crc kubenswrapper[4846]: I1201 00:12:22.694321 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 01 00:12:22 crc kubenswrapper[4846]: I1201 00:12:22.932443 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6rhj6_e31602df-d2bc-40de-93be-42600c22a9c1/marketplace-operator/1.log" Dec 01 00:12:22 crc kubenswrapper[4846]: I1201 00:12:22.932525 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerStarted","Data":"fb256a79867e152329e636d00b10b6928740a801fd49f3320e5d2338f97c013f"} Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.156927 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.157018 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299396 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299508 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299578 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299625 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299669 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299759 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299836 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299877 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.299932 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.300293 4846 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.300323 4846 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.300339 4846 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.300353 4846 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.312172 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.401563 4846 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.592342 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.592851 4846 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.609286 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.609351 4846 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="532193df-f525-41d8-9f7c-26052a245f6e" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.616221 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.616294 4846 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="532193df-f525-41d8-9f7c-26052a245f6e" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.943634 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.943847 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.943861 4846 scope.go:117] "RemoveContainer" containerID="ce9f894a4cc4f0c6545dd054e6ef6b93cdd5682e919e3b94dcbf356d3b93a8e3" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.944049 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:12:23 crc kubenswrapper[4846]: I1201 00:12:23.947941 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:12:25 crc kubenswrapper[4846]: I1201 00:12:25.419404 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:12:25 crc kubenswrapper[4846]: I1201 00:12:25.419500 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:12:54 crc kubenswrapper[4846]: I1201 00:12:54.001340 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:12:54 crc kubenswrapper[4846]: I1201 00:12:54.003644 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" podUID="3c17b590-902c-4863-823c-865652c475c0" containerName="controller-manager" containerID="cri-o://72a65eb362cf2a6c516ecb1cc702f1d5f393de34d6ed6b59352b4ae4a24425bb" gracePeriod=30 Dec 01 00:12:54 crc kubenswrapper[4846]: I1201 00:12:54.119643 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:12:54 crc kubenswrapper[4846]: I1201 00:12:54.119937 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" podUID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" containerName="route-controller-manager" containerID="cri-o://f7be5448c001f0b68e800dac504e0ee6851d21447f2919b418898317b660548c" gracePeriod=30 Dec 01 00:12:55 crc kubenswrapper[4846]: I1201 00:12:55.155926 4846 generic.go:334] "Generic (PLEG): container finished" podID="3c17b590-902c-4863-823c-865652c475c0" containerID="72a65eb362cf2a6c516ecb1cc702f1d5f393de34d6ed6b59352b4ae4a24425bb" exitCode=0 Dec 01 00:12:55 crc kubenswrapper[4846]: I1201 00:12:55.156041 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" event={"ID":"3c17b590-902c-4863-823c-865652c475c0","Type":"ContainerDied","Data":"72a65eb362cf2a6c516ecb1cc702f1d5f393de34d6ed6b59352b4ae4a24425bb"} Dec 01 00:12:55 crc kubenswrapper[4846]: I1201 00:12:55.419381 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Dec 01 00:12:55 crc kubenswrapper[4846]: I1201 00:12:55.419446 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:12:55 crc kubenswrapper[4846]: I1201 00:12:55.980167 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.111605 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert\") pod \"3c17b590-902c-4863-823c-865652c475c0\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.112076 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg5sc\" (UniqueName: \"kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc\") pod \"3c17b590-902c-4863-823c-865652c475c0\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.112142 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config\") pod \"3c17b590-902c-4863-823c-865652c475c0\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.112231 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca\") pod \"3c17b590-902c-4863-823c-865652c475c0\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.112255 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles\") pod \"3c17b590-902c-4863-823c-865652c475c0\" (UID: \"3c17b590-902c-4863-823c-865652c475c0\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.113329 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3c17b590-902c-4863-823c-865652c475c0" (UID: "3c17b590-902c-4863-823c-865652c475c0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.113444 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "3c17b590-902c-4863-823c-865652c475c0" (UID: "3c17b590-902c-4863-823c-865652c475c0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.113527 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config" (OuterVolumeSpecName: "config") pod "3c17b590-902c-4863-823c-865652c475c0" (UID: "3c17b590-902c-4863-823c-865652c475c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.118098 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc" (OuterVolumeSpecName: "kube-api-access-gg5sc") pod "3c17b590-902c-4863-823c-865652c475c0" (UID: "3c17b590-902c-4863-823c-865652c475c0"). InnerVolumeSpecName "kube-api-access-gg5sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.121300 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3c17b590-902c-4863-823c-865652c475c0" (UID: "3c17b590-902c-4863-823c-865652c475c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.167419 4846 generic.go:334] "Generic (PLEG): container finished" podID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" containerID="f7be5448c001f0b68e800dac504e0ee6851d21447f2919b418898317b660548c" exitCode=0 Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.167543 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" event={"ID":"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136","Type":"ContainerDied","Data":"f7be5448c001f0b68e800dac504e0ee6851d21447f2919b418898317b660548c"} Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.169960 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" event={"ID":"3c17b590-902c-4863-823c-865652c475c0","Type":"ContainerDied","Data":"4f50c55e103bb39a94a2820d57060dc4b8d4a46ec7c60f130f0a11e9fdf2eae9"} Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.170060 4846 scope.go:117] "RemoveContainer" containerID="72a65eb362cf2a6c516ecb1cc702f1d5f393de34d6ed6b59352b4ae4a24425bb" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.170363 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hbmbg" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.212121 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.213390 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.213428 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.213443 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3c17b590-902c-4863-823c-865652c475c0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.213458 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c17b590-902c-4863-823c-865652c475c0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.213471 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg5sc\" (UniqueName: \"kubernetes.io/projected/3c17b590-902c-4863-823c-865652c475c0-kube-api-access-gg5sc\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.215222 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hbmbg"] Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.442160 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.519762 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpgdk\" (UniqueName: \"kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk\") pod \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.519884 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert\") pod \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.519924 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config\") pod \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.520012 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca\") pod \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\" (UID: \"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136\") " Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.520841 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca" (OuterVolumeSpecName: "client-ca") pod "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" (UID: "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.520943 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config" (OuterVolumeSpecName: "config") pod "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" (UID: "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.523418 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" (UID: "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.524325 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk" (OuterVolumeSpecName: "kube-api-access-vpgdk") pod "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" (UID: "2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136"). InnerVolumeSpecName "kube-api-access-vpgdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.622037 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.622079 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.622095 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:56 crc kubenswrapper[4846]: I1201 00:12:56.622106 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpgdk\" (UniqueName: \"kubernetes.io/projected/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136-kube-api-access-vpgdk\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.004530 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:12:57 crc kubenswrapper[4846]: E1201 00:12:57.004986 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005016 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 00:12:57 crc kubenswrapper[4846]: E1201 00:12:57.005037 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c17b590-902c-4863-823c-865652c475c0" containerName="controller-manager" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005047 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c17b590-902c-4863-823c-865652c475c0" containerName="controller-manager" Dec 01 00:12:57 crc kubenswrapper[4846]: E1201 00:12:57.005062 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" containerName="installer" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005072 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" containerName="installer" Dec 01 00:12:57 crc kubenswrapper[4846]: E1201 00:12:57.005088 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" containerName="route-controller-manager" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005097 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" containerName="route-controller-manager" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005240 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6fd7f6-5551-4b1a-b743-84778b664a26" containerName="installer" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005261 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c17b590-902c-4863-823c-865652c475c0" containerName="controller-manager" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005271 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" containerName="route-controller-manager" Dec 01 00:12:57 crc 
kubenswrapper[4846]: I1201 00:12:57.005283 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.005893 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.016749 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.017640 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.019855 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.023355 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.023569 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.023821 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.023947 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.024104 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.024263 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.027026 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.027106 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.028078 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.042107 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.042386 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.042423 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9z8n\" (UniqueName: \"kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.043179 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.043220 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prsm\" (UniqueName: \"kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.043289 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.043319 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.043447 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144290 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8prsm\" (UniqueName: \"kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144568 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144597 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144635 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144667 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144705 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144723 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144756 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9z8n\" (UniqueName: \"kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.144784 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.145611 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") 
" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.145992 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.146185 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.146259 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.147854 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.150372 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.151277 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.161276 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prsm\" (UniqueName: \"kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm\") pod \"route-controller-manager-7c7f6d8788-bf67v\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.165745 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9z8n\" (UniqueName: \"kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n\") pod \"controller-manager-74577df4c5-hgnlb\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.181539 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" event={"ID":"2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136","Type":"ContainerDied","Data":"2cc941032f0e90430de5ce4b582ee9ab2c9eb418a09ad3316f47549a162491a1"} Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.181590 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.181624 4846 scope.go:117] "RemoveContainer" containerID="f7be5448c001f0b68e800dac504e0ee6851d21447f2919b418898317b660548c" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.228489 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.233550 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pzb9q"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.331536 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.346657 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.389634 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.402455 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.588606 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136" path="/var/lib/kubelet/pods/2eb1fb10-623a-4f1d-b2fa-d06e9ac2c136/volumes" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.589646 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c17b590-902c-4863-823c-865652c475c0" path="/var/lib/kubelet/pods/3c17b590-902c-4863-823c-865652c475c0/volumes" Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.620295 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:57 crc kubenswrapper[4846]: I1201 00:12:57.782696 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:12:57 crc kubenswrapper[4846]: W1201 00:12:57.788532 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod238c140c_c2c4_402e_8dea_a4af1177d6f8.slice/crio-8ddbb3a496d7078ff7ca4cb186aca2f0e5744e2487411094c50f4acb9cedee43 WatchSource:0}: Error finding container 8ddbb3a496d7078ff7ca4cb186aca2f0e5744e2487411094c50f4acb9cedee43: Status 404 returned error can't find the container with id 8ddbb3a496d7078ff7ca4cb186aca2f0e5744e2487411094c50f4acb9cedee43 Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.193425 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" 
event={"ID":"b5229137-c8b0-4cee-992f-6e6a75f8e84d","Type":"ContainerStarted","Data":"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39"} Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.193766 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" event={"ID":"b5229137-c8b0-4cee-992f-6e6a75f8e84d","Type":"ContainerStarted","Data":"cdd8f80dd253e7acd520fbf128e9d99e93a7d1adcca6f66afb0e40540904a4c3"} Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.193496 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" podUID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" containerName="controller-manager" containerID="cri-o://1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39" gracePeriod=30 Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.193854 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.195299 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" event={"ID":"238c140c-c2c4-402e-8dea-a4af1177d6f8","Type":"ContainerStarted","Data":"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0"} Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.195335 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" event={"ID":"238c140c-c2c4-402e-8dea-a4af1177d6f8","Type":"ContainerStarted","Data":"8ddbb3a496d7078ff7ca4cb186aca2f0e5744e2487411094c50f4acb9cedee43"} Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.205596 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.223815 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" podStartSLOduration=3.223799567 podStartE2EDuration="3.223799567s" podCreationTimestamp="2025-12-01 00:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:12:58.218333611 +0000 UTC m=+398.999102725" watchObservedRunningTime="2025-12-01 00:12:58.223799567 +0000 UTC m=+399.004568641" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.566529 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.655872 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:12:58 crc kubenswrapper[4846]: E1201 00:12:58.656198 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" containerName="controller-manager" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.656219 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" containerName="controller-manager" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.656373 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" containerName="controller-manager" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.656954 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.672583 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles\") pod \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.672648 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9z8n\" (UniqueName: \"kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n\") pod \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.672698 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert\") pod \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.672733 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config\") pod \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.672757 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca\") pod \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\" (UID: \"b5229137-c8b0-4cee-992f-6e6a75f8e84d\") " Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.673466 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b5229137-c8b0-4cee-992f-6e6a75f8e84d" (UID: "b5229137-c8b0-4cee-992f-6e6a75f8e84d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.674142 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config" (OuterVolumeSpecName: "config") pod "b5229137-c8b0-4cee-992f-6e6a75f8e84d" (UID: "b5229137-c8b0-4cee-992f-6e6a75f8e84d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.674163 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5229137-c8b0-4cee-992f-6e6a75f8e84d" (UID: "b5229137-c8b0-4cee-992f-6e6a75f8e84d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.702864 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5229137-c8b0-4cee-992f-6e6a75f8e84d" (UID: "b5229137-c8b0-4cee-992f-6e6a75f8e84d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.713510 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774158 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774212 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774233 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774267 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knt6j\" (UniqueName: \"kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774285 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert\") pod 
\"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774333 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774347 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5229137-c8b0-4cee-992f-6e6a75f8e84d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774356 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.774364 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5229137-c8b0-4cee-992f-6e6a75f8e84d-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.803994 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n" (OuterVolumeSpecName: "kube-api-access-d9z8n") pod "b5229137-c8b0-4cee-992f-6e6a75f8e84d" (UID: "b5229137-c8b0-4cee-992f-6e6a75f8e84d"). InnerVolumeSpecName "kube-api-access-d9z8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876118 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876195 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876229 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876327 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knt6j\" (UniqueName: \"kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876358 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.876444 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9z8n\" (UniqueName: \"kubernetes.io/projected/b5229137-c8b0-4cee-992f-6e6a75f8e84d-kube-api-access-d9z8n\") on node \"crc\" DevicePath \"\"" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.877606 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.877711 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.878359 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.880023 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:58 crc kubenswrapper[4846]: I1201 00:12:58.895140 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knt6j\" (UniqueName: \"kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j\") pod \"controller-manager-694644c558-4c7m2\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.057328 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.202282 4846 generic.go:334] "Generic (PLEG): container finished" podID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" containerID="1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39" exitCode=0 Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.202491 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" podUID="238c140c-c2c4-402e-8dea-a4af1177d6f8" containerName="route-controller-manager" containerID="cri-o://abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0" gracePeriod=30 Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.202965 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.210768 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" event={"ID":"b5229137-c8b0-4cee-992f-6e6a75f8e84d","Type":"ContainerDied","Data":"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39"} Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.210826 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74577df4c5-hgnlb" event={"ID":"b5229137-c8b0-4cee-992f-6e6a75f8e84d","Type":"ContainerDied","Data":"cdd8f80dd253e7acd520fbf128e9d99e93a7d1adcca6f66afb0e40540904a4c3"} Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.210856 4846 scope.go:117] "RemoveContainer" containerID="1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.211037 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.216574 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.233806 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" podStartSLOduration=4.2337892440000005 podStartE2EDuration="4.233789244s" podCreationTimestamp="2025-12-01 00:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:12:59.231289064 +0000 UTC m=+400.012058148" watchObservedRunningTime="2025-12-01 00:12:59.233789244 +0000 UTC m=+400.014558318" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.241499 4846 scope.go:117] "RemoveContainer" containerID="1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39" Dec 01 00:12:59 crc kubenswrapper[4846]: E1201 00:12:59.241993 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39\": container with ID starting with 1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39 not found: ID does not exist" containerID="1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39" Dec 01 00:12:59 crc 
kubenswrapper[4846]: I1201 00:12:59.242054 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39"} err="failed to get container status \"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39\": rpc error: code = NotFound desc = could not find container \"1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39\": container with ID starting with 1587dbc7765e88090fa78b5cc155b2285ffa69a58fb0ac0fcdee063c669f2b39 not found: ID does not exist" Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.261400 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.266152 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74577df4c5-hgnlb"] Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.508107 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:12:59 crc kubenswrapper[4846]: I1201 00:12:59.589778 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5229137-c8b0-4cee-992f-6e6a75f8e84d" path="/var/lib/kubelet/pods/b5229137-c8b0-4cee-992f-6e6a75f8e84d/volumes" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.063251 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.098988 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8prsm\" (UniqueName: \"kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm\") pod \"238c140c-c2c4-402e-8dea-a4af1177d6f8\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.099068 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert\") pod \"238c140c-c2c4-402e-8dea-a4af1177d6f8\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.099194 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca\") pod \"238c140c-c2c4-402e-8dea-a4af1177d6f8\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.099225 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config\") pod \"238c140c-c2c4-402e-8dea-a4af1177d6f8\" (UID: \"238c140c-c2c4-402e-8dea-a4af1177d6f8\") " Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.100146 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config" (OuterVolumeSpecName: "config") pod "238c140c-c2c4-402e-8dea-a4af1177d6f8" (UID: "238c140c-c2c4-402e-8dea-a4af1177d6f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.101205 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "238c140c-c2c4-402e-8dea-a4af1177d6f8" (UID: "238c140c-c2c4-402e-8dea-a4af1177d6f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.113853 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "238c140c-c2c4-402e-8dea-a4af1177d6f8" (UID: "238c140c-c2c4-402e-8dea-a4af1177d6f8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.113910 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm" (OuterVolumeSpecName: "kube-api-access-8prsm") pod "238c140c-c2c4-402e-8dea-a4af1177d6f8" (UID: "238c140c-c2c4-402e-8dea-a4af1177d6f8"). InnerVolumeSpecName "kube-api-access-8prsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.289382 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.289429 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238c140c-c2c4-402e-8dea-a4af1177d6f8-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.289442 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8prsm\" (UniqueName: \"kubernetes.io/projected/238c140c-c2c4-402e-8dea-a4af1177d6f8-kube-api-access-8prsm\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.289456 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/238c140c-c2c4-402e-8dea-a4af1177d6f8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.295318 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" event={"ID":"a5748c7d-abab-4eb8-b13a-25332a10b39f","Type":"ContainerStarted","Data":"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77"} Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.295968 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.296018 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" event={"ID":"a5748c7d-abab-4eb8-b13a-25332a10b39f","Type":"ContainerStarted","Data":"bd94298550ccdc2ab62ad164db67b3ca41249659da2692ef67124a623df76e11"} Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.297065 4846 generic.go:334] "Generic (PLEG): container finished" podID="238c140c-c2c4-402e-8dea-a4af1177d6f8" 
containerID="abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0" exitCode=0 Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.297148 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" event={"ID":"238c140c-c2c4-402e-8dea-a4af1177d6f8","Type":"ContainerDied","Data":"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0"} Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.297174 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" event={"ID":"238c140c-c2c4-402e-8dea-a4af1177d6f8","Type":"ContainerDied","Data":"8ddbb3a496d7078ff7ca4cb186aca2f0e5744e2487411094c50f4acb9cedee43"} Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.297198 4846 scope.go:117] "RemoveContainer" containerID="abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.297302 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.321229 4846 scope.go:117] "RemoveContainer" containerID="abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0" Dec 01 00:13:00 crc kubenswrapper[4846]: E1201 00:13:00.321810 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0\": container with ID starting with abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0 not found: ID does not exist" containerID="abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.321858 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0"} err="failed to get container status \"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0\": rpc error: code = NotFound desc = could not find container \"abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0\": container with ID starting with abfda1ceb267f148ffec2bb2c673c996a493f0e472a0a3e462de9d652b3715f0 not found: ID does not exist" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.326499 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.328643 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" podStartSLOduration=3.32862384 podStartE2EDuration="3.32862384s" podCreationTimestamp="2025-12-01 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:13:00.327547705 +0000 UTC m=+401.108316789" watchObservedRunningTime="2025-12-01 00:13:00.32862384 +0000 UTC m=+401.109392934" Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.431793 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:13:00 crc kubenswrapper[4846]: I1201 00:13:00.436518 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-bf67v"] Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.005729 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:01 crc kubenswrapper[4846]: E1201 00:13:01.006041 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238c140c-c2c4-402e-8dea-a4af1177d6f8" containerName="route-controller-manager" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.006059 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="238c140c-c2c4-402e-8dea-a4af1177d6f8" containerName="route-controller-manager" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.006187 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="238c140c-c2c4-402e-8dea-a4af1177d6f8" containerName="route-controller-manager" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.006762 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.009122 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.009373 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.009385 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.009571 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.009575 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.011152 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.038512 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.099384 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.099450 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt6hw\" (UniqueName: \"kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.099594 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.099655 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.200526 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.200592 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.200664 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.200714 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt6hw\" (UniqueName: \"kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.201881 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.202315 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.210557 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert\") pod 
\"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.218138 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt6hw\" (UniqueName: \"kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw\") pod \"route-controller-manager-bdfb75977-7zv57\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.322775 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.592882 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="238c140c-c2c4-402e-8dea-a4af1177d6f8" path="/var/lib/kubelet/pods/238c140c-c2c4-402e-8dea-a4af1177d6f8/volumes" Dec 01 00:13:01 crc kubenswrapper[4846]: I1201 00:13:01.729727 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:01 crc kubenswrapper[4846]: W1201 00:13:01.741836 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod870b8241_4d29_4b15_965a_41d992b0ed38.slice/crio-a83de0d77922b880061c62aee1321a7dbcb45bffd309c663639c191bbe19f043 WatchSource:0}: Error finding container a83de0d77922b880061c62aee1321a7dbcb45bffd309c663639c191bbe19f043: Status 404 returned error can't find the container with id a83de0d77922b880061c62aee1321a7dbcb45bffd309c663639c191bbe19f043 Dec 01 00:13:02 crc kubenswrapper[4846]: I1201 00:13:02.322352 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" event={"ID":"870b8241-4d29-4b15-965a-41d992b0ed38","Type":"ContainerStarted","Data":"a83de0d77922b880061c62aee1321a7dbcb45bffd309c663639c191bbe19f043"} Dec 01 00:13:03 crc kubenswrapper[4846]: I1201 00:13:03.331099 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" event={"ID":"870b8241-4d29-4b15-965a-41d992b0ed38","Type":"ContainerStarted","Data":"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce"} Dec 01 00:13:03 crc kubenswrapper[4846]: I1201 00:13:03.331465 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:03 crc kubenswrapper[4846]: I1201 00:13:03.354467 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" podStartSLOduration=6.354434797 podStartE2EDuration="6.354434797s" podCreationTimestamp="2025-12-01 00:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:13:03.34950135 +0000 UTC m=+404.130270464" watchObservedRunningTime="2025-12-01 00:13:03.354434797 +0000 UTC m=+404.135203871" Dec 01 00:13:03 crc kubenswrapper[4846]: I1201 00:13:03.434041 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:17 crc kubenswrapper[4846]: I1201 00:13:17.009931 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:13:17 crc kubenswrapper[4846]: I1201 00:13:17.012423 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ct8wp" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="registry-server" containerID="cri-o://1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053" gracePeriod=2 Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.285143 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.408121 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.408388 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44ddl" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="registry-server" containerID="cri-o://4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093" gracePeriod=2 Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.434179 4846 generic.go:334] "Generic (PLEG): container finished" podID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerID="1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053" exitCode=0 Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.434456 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerDied","Data":"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053"} Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.434607 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ct8wp" event={"ID":"26dc41c0-b440-494e-9bfa-2a70f3e16040","Type":"ContainerDied","Data":"58b5abd201a345d069a48a0ab01b9c2e8509ea4e7251798bfadf6186fc04a709"} Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.434473 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ct8wp" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.434730 4846 scope.go:117] "RemoveContainer" containerID="1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.461342 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc4kc\" (UniqueName: \"kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc\") pod \"26dc41c0-b440-494e-9bfa-2a70f3e16040\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.461830 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content\") pod \"26dc41c0-b440-494e-9bfa-2a70f3e16040\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.461899 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities\") pod \"26dc41c0-b440-494e-9bfa-2a70f3e16040\" (UID: \"26dc41c0-b440-494e-9bfa-2a70f3e16040\") " Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.462938 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities" (OuterVolumeSpecName: "utilities") pod "26dc41c0-b440-494e-9bfa-2a70f3e16040" (UID: "26dc41c0-b440-494e-9bfa-2a70f3e16040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.464439 4846 scope.go:117] "RemoveContainer" containerID="41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.470340 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc" (OuterVolumeSpecName: "kube-api-access-kc4kc") pod "26dc41c0-b440-494e-9bfa-2a70f3e16040" (UID: "26dc41c0-b440-494e-9bfa-2a70f3e16040"). InnerVolumeSpecName "kube-api-access-kc4kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.508259 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26dc41c0-b440-494e-9bfa-2a70f3e16040" (UID: "26dc41c0-b440-494e-9bfa-2a70f3e16040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.557007 4846 scope.go:117] "RemoveContainer" containerID="93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.563324 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.563358 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26dc41c0-b440-494e-9bfa-2a70f3e16040-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.563369 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc4kc\" (UniqueName: \"kubernetes.io/projected/26dc41c0-b440-494e-9bfa-2a70f3e16040-kube-api-access-kc4kc\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.576003 4846 scope.go:117] "RemoveContainer" containerID="1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053" Dec 01 00:13:19 crc kubenswrapper[4846]: E1201 00:13:19.576697 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053\": container with ID starting with 1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053 not found: ID does not exist" containerID="1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.576762 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053"} err="failed to get container status \"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053\": rpc error: code = NotFound desc = could not find container \"1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053\": container with ID starting with 1a0f04d2436f438babdcf278c2071fe1d0a9b617d2009af119a2b2ba181aa053 not found: ID does not exist" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.576802 4846 scope.go:117] "RemoveContainer" containerID="41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162" Dec 01 00:13:19 crc kubenswrapper[4846]: E1201 00:13:19.577388 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162\": container with ID starting with 41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162 not found: ID does not exist" containerID="41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.577458 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162"} err="failed to get container status \"41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162\": rpc error: code = NotFound desc = could not find container \"41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162\": container with ID starting with 41b30ec17b951ad2dbad89aec7fd0b667f49c2eb6a386b318e3d087b8e000162 not found: ID does not exist" Dec 01 00:13:19 crc 
kubenswrapper[4846]: I1201 00:13:19.577496 4846 scope.go:117] "RemoveContainer" containerID="93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8" Dec 01 00:13:19 crc kubenswrapper[4846]: E1201 00:13:19.578912 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8\": container with ID starting with 93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8 not found: ID does not exist" containerID="93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.578982 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8"} err="failed to get container status \"93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8\": rpc error: code = NotFound desc = could not find container \"93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8\": container with ID starting with 93566415ae9ba044a261a28ed3a6133429981a659b4650846be50c917da8e6f8 not found: ID does not exist" Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.768011 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:13:19 crc kubenswrapper[4846]: I1201 00:13:19.779959 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ct8wp"] Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.003914 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.176058 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities\") pod \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.176137 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content\") pod \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.176271 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsc66\" (UniqueName: \"kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66\") pod \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\" (UID: \"ba05fe92-6ff2-4f5e-9f60-2948afa23445\") " Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.177162 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities" (OuterVolumeSpecName: "utilities") pod "ba05fe92-6ff2-4f5e-9f60-2948afa23445" (UID: "ba05fe92-6ff2-4f5e-9f60-2948afa23445"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.180485 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66" (OuterVolumeSpecName: "kube-api-access-fsc66") pod "ba05fe92-6ff2-4f5e-9f60-2948afa23445" (UID: "ba05fe92-6ff2-4f5e-9f60-2948afa23445"). InnerVolumeSpecName "kube-api-access-fsc66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.201658 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba05fe92-6ff2-4f5e-9f60-2948afa23445" (UID: "ba05fe92-6ff2-4f5e-9f60-2948afa23445"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.278096 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsc66\" (UniqueName: \"kubernetes.io/projected/ba05fe92-6ff2-4f5e-9f60-2948afa23445-kube-api-access-fsc66\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.278143 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.278160 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba05fe92-6ff2-4f5e-9f60-2948afa23445-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.443831 4846 generic.go:334] "Generic (PLEG): container finished" podID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerID="4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093" exitCode=0 Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.443888 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerDied","Data":"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093"} Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.443918 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44ddl" event={"ID":"ba05fe92-6ff2-4f5e-9f60-2948afa23445","Type":"ContainerDied","Data":"5b8561274f57734803139c634b1f182ce95cd2a93737a991466d039924dda4dd"} Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.443967 4846 scope.go:117] "RemoveContainer" containerID="4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.444101 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44ddl" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.458651 4846 scope.go:117] "RemoveContainer" containerID="9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.478823 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.480975 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44ddl"] Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.482559 4846 scope.go:117] "RemoveContainer" containerID="130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.498781 4846 scope.go:117] "RemoveContainer" containerID="4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093" Dec 01 00:13:20 crc kubenswrapper[4846]: E1201 00:13:20.499184 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093\": container with ID starting with 4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093 not found: ID does not exist" containerID="4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.499219 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093"} err="failed to get container status \"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093\": rpc error: code = NotFound desc = could not find container \"4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093\": container with ID starting with 4e4ee087132148ec221da4835d5596180127927fb0e37694ed6e2b8e6f482093 not found: ID does not exist" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.499258 4846 scope.go:117] "RemoveContainer" containerID="9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc" Dec 01 00:13:20 crc kubenswrapper[4846]: E1201 00:13:20.499484 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc\": container with ID starting with 9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc not found: ID does not exist" containerID="9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.499512 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc"} err="failed to get container status \"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc\": rpc error: code = NotFound desc = could not find container \"9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc\": container with ID starting with 9fae536f07ae9b7c256cca5ded6e819454198d38b2c9a87f7ff37fb010310bfc not found: ID does not exist" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.499532 4846 scope.go:117] "RemoveContainer" containerID="130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c" Dec 01 00:13:20 crc kubenswrapper[4846]: E1201 00:13:20.499781 4846 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c\": container with ID starting with 130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c not found: ID does not exist" containerID="130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c" Dec 01 00:13:20 crc kubenswrapper[4846]: I1201 00:13:20.499808 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c"} err="failed to get container status \"130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c\": rpc error: code = NotFound desc = could not find container \"130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c\": container with ID starting with 130f66e0e204ebc1448aaa4a4afab40e2390001821b4b94dbba1d3250ed3f59c not found: ID does not exist" Dec 01 00:13:21 crc kubenswrapper[4846]: I1201 00:13:21.591188 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" path="/var/lib/kubelet/pods/26dc41c0-b440-494e-9bfa-2a70f3e16040/volumes" Dec 01 00:13:21 crc kubenswrapper[4846]: I1201 00:13:21.592936 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" path="/var/lib/kubelet/pods/ba05fe92-6ff2-4f5e-9f60-2948afa23445/volumes" Dec 01 00:13:25 crc kubenswrapper[4846]: I1201 00:13:25.419593 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:13:25 crc kubenswrapper[4846]: I1201 00:13:25.419708 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:13:25 crc kubenswrapper[4846]: I1201 00:13:25.419779 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:13:25 crc kubenswrapper[4846]: I1201 00:13:25.420714 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:13:25 crc kubenswrapper[4846]: I1201 00:13:25.420803 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79" gracePeriod=600 Dec 01 00:13:26 crc kubenswrapper[4846]: I1201 00:13:26.483282 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79" exitCode=0 Dec 01 00:13:26 crc kubenswrapper[4846]: I1201 00:13:26.483352 4846 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79"} Dec 01 00:13:26 crc kubenswrapper[4846]: I1201 00:13:26.484211 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c"} Dec 01 00:13:26 crc kubenswrapper[4846]: I1201 00:13:26.484244 4846 scope.go:117] "RemoveContainer" containerID="e7161678637eccfbbd445353bfded6eab8f514d350508502498e94f36cfc790b" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.024105 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.025056 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" podUID="a5748c7d-abab-4eb8-b13a-25332a10b39f" containerName="controller-manager" containerID="cri-o://b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77" gracePeriod=30 Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.136802 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.137104 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" podUID="870b8241-4d29-4b15-965a-41d992b0ed38" containerName="route-controller-manager" containerID="cri-o://7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce" gracePeriod=30 Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.440886 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.511752 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.558294 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles\") pod \"a5748c7d-abab-4eb8-b13a-25332a10b39f\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.558452 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knt6j\" (UniqueName: \"kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j\") pod \"a5748c7d-abab-4eb8-b13a-25332a10b39f\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.558481 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert\") pod \"a5748c7d-abab-4eb8-b13a-25332a10b39f\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.558517 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca\") pod \"a5748c7d-abab-4eb8-b13a-25332a10b39f\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.559366 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a5748c7d-abab-4eb8-b13a-25332a10b39f" (UID: "a5748c7d-abab-4eb8-b13a-25332a10b39f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.559563 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca" (OuterVolumeSpecName: "client-ca") pod "a5748c7d-abab-4eb8-b13a-25332a10b39f" (UID: "a5748c7d-abab-4eb8-b13a-25332a10b39f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.559751 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config\") pod \"a5748c7d-abab-4eb8-b13a-25332a10b39f\" (UID: \"a5748c7d-abab-4eb8-b13a-25332a10b39f\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.560044 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.560066 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.560477 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config" (OuterVolumeSpecName: "config") pod "a5748c7d-abab-4eb8-b13a-25332a10b39f" (UID: "a5748c7d-abab-4eb8-b13a-25332a10b39f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.564615 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a5748c7d-abab-4eb8-b13a-25332a10b39f" (UID: "a5748c7d-abab-4eb8-b13a-25332a10b39f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.564883 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j" (OuterVolumeSpecName: "kube-api-access-knt6j") pod "a5748c7d-abab-4eb8-b13a-25332a10b39f" (UID: "a5748c7d-abab-4eb8-b13a-25332a10b39f"). InnerVolumeSpecName "kube-api-access-knt6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.651007 4846 generic.go:334] "Generic (PLEG): container finished" podID="870b8241-4d29-4b15-965a-41d992b0ed38" containerID="7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce" exitCode=0 Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.651100 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" event={"ID":"870b8241-4d29-4b15-965a-41d992b0ed38","Type":"ContainerDied","Data":"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce"} Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.651209 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" event={"ID":"870b8241-4d29-4b15-965a-41d992b0ed38","Type":"ContainerDied","Data":"a83de0d77922b880061c62aee1321a7dbcb45bffd309c663639c191bbe19f043"} Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.651238 4846 scope.go:117] "RemoveContainer" containerID="7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.651235 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.652592 4846 generic.go:334] "Generic (PLEG): container finished" podID="a5748c7d-abab-4eb8-b13a-25332a10b39f" containerID="b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77" exitCode=0 Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.652672 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.652671 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" event={"ID":"a5748c7d-abab-4eb8-b13a-25332a10b39f","Type":"ContainerDied","Data":"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77"} Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.652754 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694644c558-4c7m2" event={"ID":"a5748c7d-abab-4eb8-b13a-25332a10b39f","Type":"ContainerDied","Data":"bd94298550ccdc2ab62ad164db67b3ca41249659da2692ef67124a623df76e11"} Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.661304 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt6hw\" (UniqueName: \"kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw\") pod \"870b8241-4d29-4b15-965a-41d992b0ed38\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.661388 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config\") pod \"870b8241-4d29-4b15-965a-41d992b0ed38\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.661479 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert\") pod \"870b8241-4d29-4b15-965a-41d992b0ed38\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.661552 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca\") pod \"870b8241-4d29-4b15-965a-41d992b0ed38\" (UID: \"870b8241-4d29-4b15-965a-41d992b0ed38\") " Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.662483 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca" (OuterVolumeSpecName: "client-ca") pod "870b8241-4d29-4b15-965a-41d992b0ed38" (UID: "870b8241-4d29-4b15-965a-41d992b0ed38"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.662589 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config" (OuterVolumeSpecName: "config") pod "870b8241-4d29-4b15-965a-41d992b0ed38" (UID: "870b8241-4d29-4b15-965a-41d992b0ed38"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.663104 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5748c7d-abab-4eb8-b13a-25332a10b39f-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.663139 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.663156 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870b8241-4d29-4b15-965a-41d992b0ed38-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.663169 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knt6j\" (UniqueName: \"kubernetes.io/projected/a5748c7d-abab-4eb8-b13a-25332a10b39f-kube-api-access-knt6j\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.663180 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5748c7d-abab-4eb8-b13a-25332a10b39f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.666432 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw" (OuterVolumeSpecName: "kube-api-access-zt6hw") pod "870b8241-4d29-4b15-965a-41d992b0ed38" (UID: "870b8241-4d29-4b15-965a-41d992b0ed38"). InnerVolumeSpecName "kube-api-access-zt6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.666810 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "870b8241-4d29-4b15-965a-41d992b0ed38" (UID: "870b8241-4d29-4b15-965a-41d992b0ed38"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.696879 4846 scope.go:117] "RemoveContainer" containerID="7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce" Dec 01 00:13:54 crc kubenswrapper[4846]: E1201 00:13:54.698627 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce\": container with ID starting with 7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce not found: ID does not exist" containerID="7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.698805 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce"} err="failed to get container status \"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce\": rpc error: code = NotFound desc = could not find container \"7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce\": container with ID starting with 7d700d85463682162b81d52f9287ad48bb6670422995f64451556d11a1df4dce not found: ID does not exist" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.698950 4846 scope.go:117] "RemoveContainer" containerID="b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.706081 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.715550 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-694644c558-4c7m2"] Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.741927 4846 scope.go:117] "RemoveContainer" containerID="b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77" Dec 01 00:13:54 crc kubenswrapper[4846]: E1201 00:13:54.746559 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77\": container with ID starting with b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77 not found: ID does not exist" containerID="b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.746591 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77"} err="failed to get container status \"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77\": rpc error: code = NotFound desc = could not find container \"b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77\": container with ID starting with b70e8d2549f2e36d621755bb3622889634e217cce934e0506e7b1113cdd68a77 not found: ID does not exist" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.764712 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt6hw\" (UniqueName: \"kubernetes.io/projected/870b8241-4d29-4b15-965a-41d992b0ed38-kube-api-access-zt6hw\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.764760 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/870b8241-4d29-4b15-965a-41d992b0ed38-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.993302 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:54 crc kubenswrapper[4846]: I1201 00:13:54.997601 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bdfb75977-7zv57"] Dec 01 00:13:55 crc kubenswrapper[4846]: I1201 00:13:55.542438 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:13:55 crc kubenswrapper[4846]: I1201 00:13:55.590934 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870b8241-4d29-4b15-965a-41d992b0ed38" path="/var/lib/kubelet/pods/870b8241-4d29-4b15-965a-41d992b0ed38/volumes" Dec 01 00:13:55 crc kubenswrapper[4846]: I1201 00:13:55.591823 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5748c7d-abab-4eb8-b13a-25332a10b39f" path="/var/lib/kubelet/pods/a5748c7d-abab-4eb8-b13a-25332a10b39f/volumes" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045532 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045833 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="extract-content" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045851 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="extract-content" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045867 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045876 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045891 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870b8241-4d29-4b15-965a-41d992b0ed38" containerName="route-controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045899 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="870b8241-4d29-4b15-965a-41d992b0ed38" containerName="route-controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045914 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="extract-utilities" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045923 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="extract-utilities" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045931 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="extract-content" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045938 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="extract-content" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045950 4846 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.045980 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.045997 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="extract-utilities" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046006 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="extract-utilities" Dec 01 00:13:56 crc kubenswrapper[4846]: E1201 00:13:56.046020 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5748c7d-abab-4eb8-b13a-25332a10b39f" containerName="controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046028 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5748c7d-abab-4eb8-b13a-25332a10b39f" containerName="controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046146 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="26dc41c0-b440-494e-9bfa-2a70f3e16040" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046157 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="870b8241-4d29-4b15-965a-41d992b0ed38" containerName="route-controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046172 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba05fe92-6ff2-4f5e-9f60-2948afa23445" containerName="registry-server" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046182 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5748c7d-abab-4eb8-b13a-25332a10b39f" containerName="controller-manager" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.046709 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.049440 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.049673 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.050021 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.050328 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.050774 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.051080 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.053003 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.054502 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.055205 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.056960 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.057224 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.057775 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.057792 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.061613 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.067802 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.068216 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.070496 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.181947 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182044 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182086 4846 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8cj\" (UniqueName: \"kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182113 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182140 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z28g4\" (UniqueName: \"kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182231 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182268 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182295 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.182323 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284041 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284122 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284181 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284211 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8cj\" (UniqueName: \"kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284246 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284273 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z28g4\" (UniqueName: \"kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284311 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284339 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.284370 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.286025 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.286222 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.287039 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.287150 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.288786 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.291495 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.292264 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.304633 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8cj\" (UniqueName: \"kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj\") pod \"route-controller-manager-c6975fb67-hlthv\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.308476 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z28g4\" (UniqueName: \"kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4\") pod \"controller-manager-5f5578755f-jbt78\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " 
pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.366197 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.382143 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.661100 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:13:56 crc kubenswrapper[4846]: W1201 00:13:56.671067 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba43e5c3_df28_47a6_8163_69fc0ddd7385.slice/crio-754a3462e8ae0f51eb055738c58a22a88faca9b341d588136519afbf9b0e0419 WatchSource:0}: Error finding container 754a3462e8ae0f51eb055738c58a22a88faca9b341d588136519afbf9b0e0419: Status 404 returned error can't find the container with id 754a3462e8ae0f51eb055738c58a22a88faca9b341d588136519afbf9b0e0419 Dec 01 00:13:56 crc kubenswrapper[4846]: I1201 00:13:56.799926 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:13:56 crc kubenswrapper[4846]: W1201 00:13:56.803630 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbddf3bdc_5db7_4760_a171_587d96fe07c8.slice/crio-68604cc4f58718c7f9ae6c17f1c77638b60e82a8fd53062fcc915fbb1b46f6e5 WatchSource:0}: Error finding container 68604cc4f58718c7f9ae6c17f1c77638b60e82a8fd53062fcc915fbb1b46f6e5: Status 404 returned error can't find the container with id 68604cc4f58718c7f9ae6c17f1c77638b60e82a8fd53062fcc915fbb1b46f6e5 Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.673699 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" event={"ID":"ba43e5c3-df28-47a6-8163-69fc0ddd7385","Type":"ContainerStarted","Data":"20d16a3b96bbefe40b61f72df857b5dafb9cd0171a95d2d413186f57daf383bb"} Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.674077 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.674096 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" event={"ID":"ba43e5c3-df28-47a6-8163-69fc0ddd7385","Type":"ContainerStarted","Data":"754a3462e8ae0f51eb055738c58a22a88faca9b341d588136519afbf9b0e0419"} Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.674845 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" event={"ID":"bddf3bdc-5db7-4760-a171-587d96fe07c8","Type":"ContainerStarted","Data":"81e0e2bd62fd4b3556ad02a09c1c350e4617e4980427ef3e4a94a1fb3cead2f7"} Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.674898 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" 
event={"ID":"bddf3bdc-5db7-4760-a171-587d96fe07c8","Type":"ContainerStarted","Data":"68604cc4f58718c7f9ae6c17f1c77638b60e82a8fd53062fcc915fbb1b46f6e5"} Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.675064 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.680728 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.684191 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.696269 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" podStartSLOduration=3.696251187 podStartE2EDuration="3.696251187s" podCreationTimestamp="2025-12-01 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:13:57.694182516 +0000 UTC m=+458.474951590" watchObservedRunningTime="2025-12-01 00:13:57.696251187 +0000 UTC m=+458.477020261" Dec 01 00:13:57 crc kubenswrapper[4846]: I1201 00:13:57.713958 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" podStartSLOduration=3.713935967 podStartE2EDuration="3.713935967s" podCreationTimestamp="2025-12-01 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:13:57.713650189 +0000 UTC m=+458.494419283" watchObservedRunningTime="2025-12-01 00:13:57.713935967 +0000 UTC m=+458.494705041" Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.007431 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.008474 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" podUID="bddf3bdc-5db7-4760-a171-587d96fe07c8" containerName="controller-manager" containerID="cri-o://81e0e2bd62fd4b3556ad02a09c1c350e4617e4980427ef3e4a94a1fb3cead2f7" gracePeriod=30 Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.020188 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.020400 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" podUID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" containerName="route-controller-manager" containerID="cri-o://20d16a3b96bbefe40b61f72df857b5dafb9cd0171a95d2d413186f57daf383bb" gracePeriod=30 Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.791466 4846 generic.go:334] "Generic (PLEG): container finished" podID="bddf3bdc-5db7-4760-a171-587d96fe07c8" containerID="81e0e2bd62fd4b3556ad02a09c1c350e4617e4980427ef3e4a94a1fb3cead2f7" exitCode=0 Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.791609 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" event={"ID":"bddf3bdc-5db7-4760-a171-587d96fe07c8","Type":"ContainerDied","Data":"81e0e2bd62fd4b3556ad02a09c1c350e4617e4980427ef3e4a94a1fb3cead2f7"} Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.795283 4846 generic.go:334] "Generic (PLEG): container finished" podID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" containerID="20d16a3b96bbefe40b61f72df857b5dafb9cd0171a95d2d413186f57daf383bb" exitCode=0 Dec 01 00:14:14 crc kubenswrapper[4846]: I1201 00:14:14.795324 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" event={"ID":"ba43e5c3-df28-47a6-8163-69fc0ddd7385","Type":"ContainerDied","Data":"20d16a3b96bbefe40b61f72df857b5dafb9cd0171a95d2d413186f57daf383bb"} Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.143090 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.147973 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.190966 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8"] Dec 01 00:14:15 crc kubenswrapper[4846]: E1201 00:14:15.191235 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" containerName="route-controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.191247 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" containerName="route-controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: E1201 00:14:15.191273 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddf3bdc-5db7-4760-a171-587d96fe07c8" containerName="controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.191279 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddf3bdc-5db7-4760-a171-587d96fe07c8" containerName="controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.191360 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddf3bdc-5db7-4760-a171-587d96fe07c8" containerName="controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.191372 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" containerName="route-controller-manager" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.192098 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.200192 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8"] Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248615 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config\") pod \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248719 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z28g4\" (UniqueName: \"kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4\") pod \"bddf3bdc-5db7-4760-a171-587d96fe07c8\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248742 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca\") pod \"bddf3bdc-5db7-4760-a171-587d96fe07c8\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248773 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca\") pod \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248812 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert\") pod \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248831 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config\") pod \"bddf3bdc-5db7-4760-a171-587d96fe07c8\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248851 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8cj\" (UniqueName: \"kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj\") pod \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\" (UID: \"ba43e5c3-df28-47a6-8163-69fc0ddd7385\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248877 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert\") pod \"bddf3bdc-5db7-4760-a171-587d96fe07c8\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.248905 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles\") pod \"bddf3bdc-5db7-4760-a171-587d96fe07c8\" (UID: \"bddf3bdc-5db7-4760-a171-587d96fe07c8\") " Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.249844 4846 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba43e5c3-df28-47a6-8163-69fc0ddd7385" (UID: "ba43e5c3-df28-47a6-8163-69fc0ddd7385"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.249894 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config" (OuterVolumeSpecName: "config") pod "bddf3bdc-5db7-4760-a171-587d96fe07c8" (UID: "bddf3bdc-5db7-4760-a171-587d96fe07c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.250421 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "bddf3bdc-5db7-4760-a171-587d96fe07c8" (UID: "bddf3bdc-5db7-4760-a171-587d96fe07c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.250875 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bddf3bdc-5db7-4760-a171-587d96fe07c8" (UID: "bddf3bdc-5db7-4760-a171-587d96fe07c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.250976 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config" (OuterVolumeSpecName: "config") pod "ba43e5c3-df28-47a6-8163-69fc0ddd7385" (UID: "ba43e5c3-df28-47a6-8163-69fc0ddd7385"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.254140 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bddf3bdc-5db7-4760-a171-587d96fe07c8" (UID: "bddf3bdc-5db7-4760-a171-587d96fe07c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.254291 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba43e5c3-df28-47a6-8163-69fc0ddd7385" (UID: "ba43e5c3-df28-47a6-8163-69fc0ddd7385"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.259412 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj" (OuterVolumeSpecName: "kube-api-access-xg8cj") pod "ba43e5c3-df28-47a6-8163-69fc0ddd7385" (UID: "ba43e5c3-df28-47a6-8163-69fc0ddd7385"). InnerVolumeSpecName "kube-api-access-xg8cj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.260954 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4" (OuterVolumeSpecName: "kube-api-access-z28g4") pod "bddf3bdc-5db7-4760-a171-587d96fe07c8" (UID: "bddf3bdc-5db7-4760-a171-587d96fe07c8"). InnerVolumeSpecName "kube-api-access-z28g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350394 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfpk\" (UniqueName: \"kubernetes.io/projected/1312a910-7551-4a34-a528-b73d5a4c57e0-kube-api-access-vrfpk\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350460 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1312a910-7551-4a34-a528-b73d5a4c57e0-serving-cert\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350575 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-client-ca\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350727 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-config\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350843 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350860 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z28g4\" (UniqueName: \"kubernetes.io/projected/bddf3bdc-5db7-4760-a171-587d96fe07c8-kube-api-access-z28g4\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350872 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350883 4846 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba43e5c3-df28-47a6-8163-69fc0ddd7385-client-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350892 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ba43e5c3-df28-47a6-8163-69fc0ddd7385-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350900 4846 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350909 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8cj\" (UniqueName: \"kubernetes.io/projected/ba43e5c3-df28-47a6-8163-69fc0ddd7385-kube-api-access-xg8cj\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350918 4846 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bddf3bdc-5db7-4760-a171-587d96fe07c8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.350927 4846 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bddf3bdc-5db7-4760-a171-587d96fe07c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.451597 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1312a910-7551-4a34-a528-b73d5a4c57e0-serving-cert\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.451670 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-client-ca\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.451738 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-config\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.451822 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfpk\" (UniqueName: \"kubernetes.io/projected/1312a910-7551-4a34-a528-b73d5a4c57e0-kube-api-access-vrfpk\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.454157 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-config\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.455309 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1312a910-7551-4a34-a528-b73d5a4c57e0-client-ca\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.456091 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1312a910-7551-4a34-a528-b73d5a4c57e0-serving-cert\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.469904 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfpk\" (UniqueName: \"kubernetes.io/projected/1312a910-7551-4a34-a528-b73d5a4c57e0-kube-api-access-vrfpk\") pod \"route-controller-manager-598797d5dc-bhpg8\" (UID: \"1312a910-7551-4a34-a528-b73d5a4c57e0\") " pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.513172 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.802781 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.802755 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv" event={"ID":"ba43e5c3-df28-47a6-8163-69fc0ddd7385","Type":"ContainerDied","Data":"754a3462e8ae0f51eb055738c58a22a88faca9b341d588136519afbf9b0e0419"} Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.803238 4846 scope.go:117] "RemoveContainer" containerID="20d16a3b96bbefe40b61f72df857b5dafb9cd0171a95d2d413186f57daf383bb" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.804868 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" event={"ID":"bddf3bdc-5db7-4760-a171-587d96fe07c8","Type":"ContainerDied","Data":"68604cc4f58718c7f9ae6c17f1c77638b60e82a8fd53062fcc915fbb1b46f6e5"} Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.804908 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f5578755f-jbt78" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.821927 4846 scope.go:117] "RemoveContainer" containerID="81e0e2bd62fd4b3556ad02a09c1c350e4617e4980427ef3e4a94a1fb3cead2f7" Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.831194 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.836084 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f5578755f-jbt78"] Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.840199 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.844159 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c6975fb67-hlthv"] Dec 01 00:14:15 crc kubenswrapper[4846]: I1201 00:14:15.978764 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8"] Dec 01 00:14:15 crc kubenswrapper[4846]: W1201 00:14:15.994490 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1312a910_7551_4a34_a528_b73d5a4c57e0.slice/crio-0b0c7b3130616b8a27fa9167d306dfa0f4bd18247abf107b691c2593d7c76a18 WatchSource:0}: Error finding container 0b0c7b3130616b8a27fa9167d306dfa0f4bd18247abf107b691c2593d7c76a18: Status 404 returned error can't find the container with id 0b0c7b3130616b8a27fa9167d306dfa0f4bd18247abf107b691c2593d7c76a18 Dec 01 00:14:16 crc kubenswrapper[4846]: I1201 00:14:16.816676 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" event={"ID":"1312a910-7551-4a34-a528-b73d5a4c57e0","Type":"ContainerStarted","Data":"251adb7231c7f0b7dc965f532df59f6f4eb31b31774981b9d36cc46e20f41fe8"} Dec 01 00:14:16 crc kubenswrapper[4846]: I1201 00:14:16.816838 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:16 crc kubenswrapper[4846]: I1201 00:14:16.816870 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" event={"ID":"1312a910-7551-4a34-a528-b73d5a4c57e0","Type":"ContainerStarted","Data":"0b0c7b3130616b8a27fa9167d306dfa0f4bd18247abf107b691c2593d7c76a18"} Dec 01 00:14:16 crc kubenswrapper[4846]: I1201 00:14:16.824982 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" Dec 01 00:14:16 crc kubenswrapper[4846]: I1201 00:14:16.853263 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-598797d5dc-bhpg8" podStartSLOduration=2.853235965 podStartE2EDuration="2.853235965s" podCreationTimestamp="2025-12-01 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:14:16.844265892 +0000 UTC m=+477.625034996" watchObservedRunningTime="2025-12-01 00:14:16.853235965 +0000 UTC m=+477.634005049" Dec 01 
00:14:17 crc kubenswrapper[4846]: I1201 00:14:17.594573 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba43e5c3-df28-47a6-8163-69fc0ddd7385" path="/var/lib/kubelet/pods/ba43e5c3-df28-47a6-8163-69fc0ddd7385/volumes" Dec 01 00:14:17 crc kubenswrapper[4846]: I1201 00:14:17.596406 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddf3bdc-5db7-4760-a171-587d96fe07c8" path="/var/lib/kubelet/pods/bddf3bdc-5db7-4760-a171-587d96fe07c8/volumes" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.064661 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-544fd96b78-std4n"] Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.067020 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.070935 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.071473 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.071498 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.072088 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.072859 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.075811 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.086175 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.091171 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-544fd96b78-std4n"] Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.189667 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b039db93-903a-436e-916a-d1ed470e7192-serving-cert\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.189739 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-client-ca\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.189825 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-proxy-ca-bundles\") pod 
\"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.189854 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hkf\" (UniqueName: \"kubernetes.io/projected/b039db93-903a-436e-916a-d1ed470e7192-kube-api-access-m7hkf\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.189882 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-config\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.291283 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b039db93-903a-436e-916a-d1ed470e7192-serving-cert\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.291335 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-client-ca\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.291380 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-proxy-ca-bundles\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.291400 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7hkf\" (UniqueName: \"kubernetes.io/projected/b039db93-903a-436e-916a-d1ed470e7192-kube-api-access-m7hkf\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.291422 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-config\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.292514 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-client-ca\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 
00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.292765 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-config\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.292986 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b039db93-903a-436e-916a-d1ed470e7192-proxy-ca-bundles\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.301477 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b039db93-903a-436e-916a-d1ed470e7192-serving-cert\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.307287 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7hkf\" (UniqueName: \"kubernetes.io/projected/b039db93-903a-436e-916a-d1ed470e7192-kube-api-access-m7hkf\") pod \"controller-manager-544fd96b78-std4n\" (UID: \"b039db93-903a-436e-916a-d1ed470e7192\") " pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.435742 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:18 crc kubenswrapper[4846]: I1201 00:14:18.843954 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-544fd96b78-std4n"] Dec 01 00:14:18 crc kubenswrapper[4846]: W1201 00:14:18.850535 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb039db93_903a_436e_916a_d1ed470e7192.slice/crio-2d6937731ece6c36d41c739b28825dd867f0e7e875d47e998b28c1600300de66 WatchSource:0}: Error finding container 2d6937731ece6c36d41c739b28825dd867f0e7e875d47e998b28c1600300de66: Status 404 returned error can't find the container with id 2d6937731ece6c36d41c739b28825dd867f0e7e875d47e998b28c1600300de66 Dec 01 00:14:19 crc kubenswrapper[4846]: I1201 00:14:19.844632 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" event={"ID":"b039db93-903a-436e-916a-d1ed470e7192","Type":"ContainerStarted","Data":"2b55e061a9b9f44f3b9ec8383fb97cf3b35a589f6061c88e4efc907a26f6f0fe"} Dec 01 00:14:19 crc kubenswrapper[4846]: I1201 00:14:19.845117 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" event={"ID":"b039db93-903a-436e-916a-d1ed470e7192","Type":"ContainerStarted","Data":"2d6937731ece6c36d41c739b28825dd867f0e7e875d47e998b28c1600300de66"} Dec 01 00:14:19 crc kubenswrapper[4846]: I1201 00:14:19.845664 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:19 crc kubenswrapper[4846]: I1201 00:14:19.850207 4846 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" Dec 01 00:14:19 crc kubenswrapper[4846]: I1201 00:14:19.866387 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-544fd96b78-std4n" podStartSLOduration=5.86636236 podStartE2EDuration="5.86636236s" podCreationTimestamp="2025-12-01 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:14:19.863357611 +0000 UTC m=+480.644126705" watchObservedRunningTime="2025-12-01 00:14:19.86636236 +0000 UTC m=+480.647131434" Dec 01 00:14:20 crc kubenswrapper[4846]: I1201 00:14:20.584024 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" containerID="cri-o://54460d4e253bb966291bdde71aff3717ab198e7fe1019f2e2c4a4aba519f3a1f" gracePeriod=15 Dec 01 00:14:20 crc kubenswrapper[4846]: I1201 00:14:20.852977 4846 generic.go:334] "Generic (PLEG): container finished" podID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerID="54460d4e253bb966291bdde71aff3717ab198e7fe1019f2e2c4a4aba519f3a1f" exitCode=0 Dec 01 00:14:20 crc kubenswrapper[4846]: I1201 00:14:20.853066 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" event={"ID":"b7519ac9-b09f-4169-bf4d-b6ec5849661c","Type":"ContainerDied","Data":"54460d4e253bb966291bdde71aff3717ab198e7fe1019f2e2c4a4aba519f3a1f"} Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.482082 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635554 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635631 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635710 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635735 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cp6z\" (UniqueName: \"kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635765 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635795 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635784 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635840 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635877 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635900 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635927 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635954 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.635977 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.636017 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.636063 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs\") pod \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\" (UID: \"b7519ac9-b09f-4169-bf4d-b6ec5849661c\") " Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.636328 4846 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.636952 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.637107 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.637625 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.637725 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.655901 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.656019 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z" (OuterVolumeSpecName: "kube-api-access-7cp6z") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "kube-api-access-7cp6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.656331 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.657296 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.658160 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.658187 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.658371 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.658459 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.658643 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7519ac9-b09f-4169-bf4d-b6ec5849661c" (UID: "b7519ac9-b09f-4169-bf4d-b6ec5849661c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737356 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737397 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737408 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cp6z\" (UniqueName: \"kubernetes.io/projected/b7519ac9-b09f-4169-bf4d-b6ec5849661c-kube-api-access-7cp6z\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737416 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737427 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737436 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737445 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737454 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737463 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737471 4846 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7519ac9-b09f-4169-bf4d-b6ec5849661c-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737482 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737492 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.737502 4846 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7519ac9-b09f-4169-bf4d-b6ec5849661c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.863539 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.863519 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lq6bl" event={"ID":"b7519ac9-b09f-4169-bf4d-b6ec5849661c","Type":"ContainerDied","Data":"8496e1e44adfcc127e9f739b88974bd3c9c6ceac432959149b0db9af97ef2b33"} Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.864196 4846 scope.go:117] "RemoveContainer" containerID="54460d4e253bb966291bdde71aff3717ab198e7fe1019f2e2c4a4aba519f3a1f" Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.901399 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:14:21 crc kubenswrapper[4846]: I1201 00:14:21.906412 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lq6bl"] Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.066444 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-bcdbc878b-29488"] Dec 01 00:14:22 crc kubenswrapper[4846]: E1201 00:14:22.066797 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.066815 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.066941 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" containerName="oauth-openshift" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.067652 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.070846 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.071234 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.071394 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.071465 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.071468 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.073077 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.073543 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.074071 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.074253 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.074663 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.100914 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.112075 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.116109 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.117759 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.124049 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bcdbc878b-29488"] Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.130263 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.142526 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-login\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " 
pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.142693 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.142827 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.142926 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-policies\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143041 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-error\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143156 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143263 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143354 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143448 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-service-ca\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143533 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143622 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-session\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143707 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-dir\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143786 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjwl\" (UniqueName: \"kubernetes.io/projected/d48a6b9b-1a95-419a-8f0f-52458a0b2236-kube-api-access-fsjwl\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.143881 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244771 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244834 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244858 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-service-ca\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244882 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244928 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-session\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244958 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-dir\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.244983 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjwl\" (UniqueName: \"kubernetes.io/projected/d48a6b9b-1a95-419a-8f0f-52458a0b2236-kube-api-access-fsjwl\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245007 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245038 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-login\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245063 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245087 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245108 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-policies\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245127 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-error\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245159 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.245567 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-dir\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.246237 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.247193 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-audit-policies\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.247647 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.250485 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.250525 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.251514 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.251796 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-session\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.251879 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-service-ca\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.252087 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-login\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.253183 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-user-template-error\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.254163 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-router-certs\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.260649 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d48a6b9b-1a95-419a-8f0f-52458a0b2236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: 
\"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.263353 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsjwl\" (UniqueName: \"kubernetes.io/projected/d48a6b9b-1a95-419a-8f0f-52458a0b2236-kube-api-access-fsjwl\") pod \"oauth-openshift-bcdbc878b-29488\" (UID: \"d48a6b9b-1a95-419a-8f0f-52458a0b2236\") " pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.425200 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.852230 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bcdbc878b-29488"] Dec 01 00:14:22 crc kubenswrapper[4846]: W1201 00:14:22.856049 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd48a6b9b_1a95_419a_8f0f_52458a0b2236.slice/crio-7dfee3aa5b1ff903761f500129bb2ce9ba05674ef77debde0fd397b473644c65 WatchSource:0}: Error finding container 7dfee3aa5b1ff903761f500129bb2ce9ba05674ef77debde0fd397b473644c65: Status 404 returned error can't find the container with id 7dfee3aa5b1ff903761f500129bb2ce9ba05674ef77debde0fd397b473644c65 Dec 01 00:14:22 crc kubenswrapper[4846]: I1201 00:14:22.873784 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" event={"ID":"d48a6b9b-1a95-419a-8f0f-52458a0b2236","Type":"ContainerStarted","Data":"7dfee3aa5b1ff903761f500129bb2ce9ba05674ef77debde0fd397b473644c65"} Dec 01 00:14:23 crc kubenswrapper[4846]: I1201 00:14:23.587712 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7519ac9-b09f-4169-bf4d-b6ec5849661c" path="/var/lib/kubelet/pods/b7519ac9-b09f-4169-bf4d-b6ec5849661c/volumes" Dec 01 00:14:23 crc kubenswrapper[4846]: I1201 00:14:23.882289 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" event={"ID":"d48a6b9b-1a95-419a-8f0f-52458a0b2236","Type":"ContainerStarted","Data":"d0c4325e591192f9937b2fbe617aaf1a2ec8e3450f4e072580033df3fc8700e6"} Dec 01 00:14:23 crc kubenswrapper[4846]: I1201 00:14:23.883150 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:23 crc kubenswrapper[4846]: I1201 00:14:23.888211 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" Dec 01 00:14:23 crc kubenswrapper[4846]: I1201 00:14:23.913660 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bcdbc878b-29488" podStartSLOduration=28.913629717 podStartE2EDuration="28.913629717s" podCreationTimestamp="2025-12-01 00:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:14:23.907481067 +0000 UTC m=+484.688250151" watchObservedRunningTime="2025-12-01 00:14:23.913629717 +0000 UTC m=+484.694398831" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.238652 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:14:35 crc 
kubenswrapper[4846]: I1201 00:14:35.239627 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xjt44" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="registry-server" containerID="cri-o://ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db" gracePeriod=30 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.247395 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.248097 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w968z" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="registry-server" containerID="cri-o://bed8de3152113eccad10a290f691d45faf809d7b44c90a61b8355d777f9420c0" gracePeriod=30 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.275787 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.277384 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" containerID="cri-o://fb256a79867e152329e636d00b10b6928740a801fd49f3320e5d2338f97c013f" gracePeriod=30 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.281393 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.282151 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdf9f" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="registry-server" containerID="cri-o://da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" gracePeriod=30 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.288193 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.288516 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8j266" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="registry-server" containerID="cri-o://d631510b16c37b023cafbca2e96442c3415816af2cf2b3fa67ccf3aa9a8a1d7a" gracePeriod=30 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.298710 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pv8kk"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.300013 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.302745 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pv8kk"] Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.444130 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.444226 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.444305 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvll6\" (UniqueName: \"kubernetes.io/projected/ffcd812b-4d10-421c-a280-abb861c27dac-kube-api-access-tvll6\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: E1201 00:14:35.541193 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea is running failed: container process not found" containerID="da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:14:35 crc kubenswrapper[4846]: E1201 00:14:35.541885 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea is running failed: container process not found" containerID="da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:14:35 crc kubenswrapper[4846]: E1201 00:14:35.542592 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea is running failed: container process not found" containerID="da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:14:35 crc kubenswrapper[4846]: E1201 00:14:35.542646 4846 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-sdf9f" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="registry-server" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.545897 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tvll6\" (UniqueName: \"kubernetes.io/projected/ffcd812b-4d10-421c-a280-abb861c27dac-kube-api-access-tvll6\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.545965 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.546017 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.547766 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.553332 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffcd812b-4d10-421c-a280-abb861c27dac-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.563553 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvll6\" (UniqueName: \"kubernetes.io/projected/ffcd812b-4d10-421c-a280-abb861c27dac-kube-api-access-tvll6\") pod \"marketplace-operator-79b997595-pv8kk\" (UID: \"ffcd812b-4d10-421c-a280-abb861c27dac\") " pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.620341 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.865945 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.950521 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcv4t\" (UniqueName: \"kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t\") pod \"02d89d24-0fd5-41c1-a392-27a63409d1c3\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.950702 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities\") pod \"02d89d24-0fd5-41c1-a392-27a63409d1c3\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.950839 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content\") pod \"02d89d24-0fd5-41c1-a392-27a63409d1c3\" (UID: \"02d89d24-0fd5-41c1-a392-27a63409d1c3\") " Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.951917 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities" (OuterVolumeSpecName: "utilities") pod "02d89d24-0fd5-41c1-a392-27a63409d1c3" (UID: "02d89d24-0fd5-41c1-a392-27a63409d1c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.955837 4846 generic.go:334] "Generic (PLEG): container finished" podID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerID="ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db" exitCode=0 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.955916 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerDied","Data":"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.955957 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjt44" event={"ID":"02d89d24-0fd5-41c1-a392-27a63409d1c3","Type":"ContainerDied","Data":"7518d64a75c34dea9e43f11d9fbb6c5436bfbc0f6dbd04736f8aab6b9efb068b"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.955979 4846 scope.go:117] "RemoveContainer" containerID="ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.956125 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjt44" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.961540 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6rhj6_e31602df-d2bc-40de-93be-42600c22a9c1/marketplace-operator/1.log" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.961605 4846 generic.go:334] "Generic (PLEG): container finished" podID="e31602df-d2bc-40de-93be-42600c22a9c1" containerID="fb256a79867e152329e636d00b10b6928740a801fd49f3320e5d2338f97c013f" exitCode=0 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.961722 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerDied","Data":"fb256a79867e152329e636d00b10b6928740a801fd49f3320e5d2338f97c013f"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.966382 4846 generic.go:334] "Generic (PLEG): container finished" podID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerID="bed8de3152113eccad10a290f691d45faf809d7b44c90a61b8355d777f9420c0" exitCode=0 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.966453 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerDied","Data":"bed8de3152113eccad10a290f691d45faf809d7b44c90a61b8355d777f9420c0"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.969647 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerDied","Data":"d631510b16c37b023cafbca2e96442c3415816af2cf2b3fa67ccf3aa9a8a1d7a"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.969858 4846 generic.go:334] "Generic (PLEG): container finished" podID="54eba203-c984-4df6-91bc-ba04e655e541" containerID="d631510b16c37b023cafbca2e96442c3415816af2cf2b3fa67ccf3aa9a8a1d7a" exitCode=0 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.975130 4846 generic.go:334] "Generic (PLEG): container finished" podID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerID="da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" exitCode=0 Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.975182 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerDied","Data":"da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea"} Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.988224 4846 scope.go:117] "RemoveContainer" containerID="b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c" Dec 01 00:14:35 crc kubenswrapper[4846]: I1201 00:14:35.990132 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t" (OuterVolumeSpecName: "kube-api-access-hcv4t") pod "02d89d24-0fd5-41c1-a392-27a63409d1c3" (UID: "02d89d24-0fd5-41c1-a392-27a63409d1c3"). InnerVolumeSpecName "kube-api-access-hcv4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.006057 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02d89d24-0fd5-41c1-a392-27a63409d1c3" (UID: "02d89d24-0fd5-41c1-a392-27a63409d1c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.047748 4846 scope.go:117] "RemoveContainer" containerID="1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.054775 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcv4t\" (UniqueName: \"kubernetes.io/projected/02d89d24-0fd5-41c1-a392-27a63409d1c3-kube-api-access-hcv4t\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.054803 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.054816 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d89d24-0fd5-41c1-a392-27a63409d1c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.074955 4846 scope.go:117] "RemoveContainer" containerID="ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db" Dec 01 00:14:36 crc kubenswrapper[4846]: E1201 00:14:36.094717 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db\": container with ID starting with ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db not found: ID does not exist" containerID="ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.094792 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db"} err="failed to get container status \"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db\": rpc error: code = NotFound desc = could not find container \"ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db\": container with ID starting with ef626505393f6a47ba50f12197ce5979ba1c4b6fbf37a7a5c6b58654e39ae4db not found: ID does not exist" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.094821 4846 scope.go:117] "RemoveContainer" containerID="b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c" Dec 01 00:14:36 crc kubenswrapper[4846]: E1201 00:14:36.101139 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c\": container with ID starting with b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c not found: ID does not exist" containerID="b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.101202 4846 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c"} err="failed to get container status \"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c\": rpc error: code = NotFound desc = could not find container \"b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c\": container with ID starting with b395913b8225c571c67a2b92ccf80589d557daa4ee496629496ec9dd1496229c not found: ID does not exist" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.101243 4846 scope.go:117] "RemoveContainer" containerID="1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba" Dec 01 00:14:36 crc kubenswrapper[4846]: E1201 00:14:36.103967 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba\": container with ID starting with 1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba not found: ID does not exist" containerID="1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.104000 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba"} err="failed to get container status \"1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba\": rpc error: code = NotFound desc = could not find container \"1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba\": container with ID starting with 1c7afff0f797595e8326b42e80c628d2a2779874f6aff3ee04a9410f2f2af6ba not found: ID does not exist" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.104047 4846 scope.go:117] "RemoveContainer" containerID="062e7a8e0ca79cd7b9cb440350fff98d51de69a413c2f35c5fe9b3c97c742018" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.183950 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.232946 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.236670 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.243523 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.257604 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") pod \"e31602df-d2bc-40de-93be-42600c22a9c1\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.257650 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") pod \"e31602df-d2bc-40de-93be-42600c22a9c1\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.257705 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2bz\" (UniqueName: \"kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz\") pod \"e31602df-d2bc-40de-93be-42600c22a9c1\" (UID: \"e31602df-d2bc-40de-93be-42600c22a9c1\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.258724 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e31602df-d2bc-40de-93be-42600c22a9c1" (UID: "e31602df-d2bc-40de-93be-42600c22a9c1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.262205 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz" (OuterVolumeSpecName: "kube-api-access-ws2bz") pod "e31602df-d2bc-40de-93be-42600c22a9c1" (UID: "e31602df-d2bc-40de-93be-42600c22a9c1"). InnerVolumeSpecName "kube-api-access-ws2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.263470 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e31602df-d2bc-40de-93be-42600c22a9c1" (UID: "e31602df-d2bc-40de-93be-42600c22a9c1"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.312283 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.318825 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xjt44"] Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358645 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content\") pod \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358704 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities\") pod \"54eba203-c984-4df6-91bc-ba04e655e541\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358726 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jctc\" (UniqueName: \"kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc\") pod \"54eba203-c984-4df6-91bc-ba04e655e541\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358781 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content\") pod \"54eba203-c984-4df6-91bc-ba04e655e541\" (UID: \"54eba203-c984-4df6-91bc-ba04e655e541\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358810 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities\") pod \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358866 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98zqv\" (UniqueName: \"kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv\") pod \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\" (UID: \"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358892 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5kns\" (UniqueName: \"kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns\") pod \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358920 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities\") pod \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\" (UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.358945 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content\") pod \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\" 
(UID: \"ad68a9ea-9986-4c5e-a87f-69f9c237a066\") " Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.359162 4846 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.359172 4846 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e31602df-d2bc-40de-93be-42600c22a9c1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.359183 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2bz\" (UniqueName: \"kubernetes.io/projected/e31602df-d2bc-40de-93be-42600c22a9c1-kube-api-access-ws2bz\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.361655 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pv8kk"] Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.363893 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities" (OuterVolumeSpecName: "utilities") pod "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" (UID: "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.364017 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities" (OuterVolumeSpecName: "utilities") pod "54eba203-c984-4df6-91bc-ba04e655e541" (UID: "54eba203-c984-4df6-91bc-ba04e655e541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.364420 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities" (OuterVolumeSpecName: "utilities") pod "ad68a9ea-9986-4c5e-a87f-69f9c237a066" (UID: "ad68a9ea-9986-4c5e-a87f-69f9c237a066"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.366545 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc" (OuterVolumeSpecName: "kube-api-access-4jctc") pod "54eba203-c984-4df6-91bc-ba04e655e541" (UID: "54eba203-c984-4df6-91bc-ba04e655e541"). InnerVolumeSpecName "kube-api-access-4jctc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.367047 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv" (OuterVolumeSpecName: "kube-api-access-98zqv") pod "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" (UID: "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8"). InnerVolumeSpecName "kube-api-access-98zqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.367141 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns" (OuterVolumeSpecName: "kube-api-access-x5kns") pod "ad68a9ea-9986-4c5e-a87f-69f9c237a066" (UID: "ad68a9ea-9986-4c5e-a87f-69f9c237a066"). InnerVolumeSpecName "kube-api-access-x5kns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.381538 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" (UID: "eb0df655-cdf4-4a30-bd80-2c6ac270d5b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.423742 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad68a9ea-9986-4c5e-a87f-69f9c237a066" (UID: "ad68a9ea-9986-4c5e-a87f-69f9c237a066"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460209 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460251 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460268 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460284 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jctc\" (UniqueName: \"kubernetes.io/projected/54eba203-c984-4df6-91bc-ba04e655e541-kube-api-access-4jctc\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460301 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460317 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98zqv\" (UniqueName: \"kubernetes.io/projected/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8-kube-api-access-98zqv\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460330 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5kns\" (UniqueName: \"kubernetes.io/projected/ad68a9ea-9986-4c5e-a87f-69f9c237a066-kube-api-access-x5kns\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.460342 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad68a9ea-9986-4c5e-a87f-69f9c237a066-utilities\") on node \"crc\" 
DevicePath \"\"" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.480517 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54eba203-c984-4df6-91bc-ba04e655e541" (UID: "54eba203-c984-4df6-91bc-ba04e655e541"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:14:36 crc kubenswrapper[4846]: I1201 00:14:36.561442 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54eba203-c984-4df6-91bc-ba04e655e541-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.011371 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" event={"ID":"ffcd812b-4d10-421c-a280-abb861c27dac","Type":"ContainerStarted","Data":"6d0220625e593d648a407e17b69b8f159e31f53601dc5da02c84b20883364742"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.011437 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" event={"ID":"ffcd812b-4d10-421c-a280-abb861c27dac","Type":"ContainerStarted","Data":"fceb6e0902920c2cea7e37b8c81e00938c170013c4291a117763f41f8598cb9f"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.012195 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.021068 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" event={"ID":"e31602df-d2bc-40de-93be-42600c22a9c1","Type":"ContainerDied","Data":"1ce7e3feaf70425ba65d9e6e0918ef409433044bbb976b2310b07ed91b36c21a"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.021393 4846 scope.go:117] "RemoveContainer" containerID="fb256a79867e152329e636d00b10b6928740a801fd49f3320e5d2338f97c013f" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.021631 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6rhj6" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.025883 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w968z" event={"ID":"ad68a9ea-9986-4c5e-a87f-69f9c237a066","Type":"ContainerDied","Data":"e89bedf594e477d24eae97fd6a5904f530932c385250810ece81ce7b81f1f24e"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.025989 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w968z" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.035459 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8j266" event={"ID":"54eba203-c984-4df6-91bc-ba04e655e541","Type":"ContainerDied","Data":"3bbe9a51eb958c2327c75d2aa6e924c01c8d10d70bea1cec6ef449ccbafb8755"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.035620 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8j266" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.043777 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdf9f" event={"ID":"eb0df655-cdf4-4a30-bd80-2c6ac270d5b8","Type":"ContainerDied","Data":"c7ca102c7cea765e8c1351a3a00d516c890def3a37f95aea12c06272569244c3"} Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.044011 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdf9f" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.051367 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" podStartSLOduration=2.051342901 podStartE2EDuration="2.051342901s" podCreationTimestamp="2025-12-01 00:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:14:37.04930684 +0000 UTC m=+497.830075914" watchObservedRunningTime="2025-12-01 00:14:37.051342901 +0000 UTC m=+497.832111975" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.065498 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pv8kk" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.080986 4846 scope.go:117] "RemoveContainer" containerID="bed8de3152113eccad10a290f691d45faf809d7b44c90a61b8355d777f9420c0" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.112423 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.115981 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8j266"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.122773 4846 scope.go:117] "RemoveContainer" containerID="75cd310a4964dc2a4c149922f563428dff460bbd627b6d621138b092ee12e083" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.155895 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.169905 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdf9f"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.174987 4846 scope.go:117] "RemoveContainer" containerID="1d65eb3ec102d08516588790dd3d7551eaa5c0f6f63efed7dd36187a06844117" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.189759 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.195050 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6rhj6"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.205379 4846 scope.go:117] "RemoveContainer" containerID="d631510b16c37b023cafbca2e96442c3415816af2cf2b3fa67ccf3aa9a8a1d7a" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.207110 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.212644 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w968z"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 
00:14:37.224836 4846 scope.go:117] "RemoveContainer" containerID="43dc27abd9f8bc32e13187d24c38b4d4b91dcb8b9a85fa408e6589bdcb19bcf4" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.246958 4846 scope.go:117] "RemoveContainer" containerID="3ee859906487fc5c040fda17e43efc2ca2c8d01bfa960f97e0cf1f59d701a1c8" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.266650 4846 scope.go:117] "RemoveContainer" containerID="da7824b7e0cabaf9009a60b2bec6b65a2bb5f10dfe29c45c10aaedbf73a598ea" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.283970 4846 scope.go:117] "RemoveContainer" containerID="189ac41019b342ce1145cdcf53bc70a4ba074d01f2323eed0fd143396590f74f" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.301760 4846 scope.go:117] "RemoveContainer" containerID="11debf85cd1a6c90cf566131dd2001d5ce58d6bac5b3e811e86a911ca29619ba" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.589601 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" path="/var/lib/kubelet/pods/02d89d24-0fd5-41c1-a392-27a63409d1c3/volumes" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.590856 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54eba203-c984-4df6-91bc-ba04e655e541" path="/var/lib/kubelet/pods/54eba203-c984-4df6-91bc-ba04e655e541/volumes" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.591461 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" path="/var/lib/kubelet/pods/ad68a9ea-9986-4c5e-a87f-69f9c237a066/volumes" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.592610 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" path="/var/lib/kubelet/pods/e31602df-d2bc-40de-93be-42600c22a9c1/volumes" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.593163 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" path="/var/lib/kubelet/pods/eb0df655-cdf4-4a30-bd80-2c6ac270d5b8/volumes" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.655867 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9l658"] Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656144 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656159 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656172 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656181 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656193 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656199 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656209 4846 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656233 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656241 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656248 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656256 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656263 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656272 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656279 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656287 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656294 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656308 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656315 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656325 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656333 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656345 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656353 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656384 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656392 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="extract-utilities" Dec 01 00:14:37 crc 
kubenswrapper[4846]: E1201 00:14:37.656400 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656407 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="extract-utilities" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656417 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656424 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: E1201 00:14:37.656437 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656443 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="extract-content" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656546 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="54eba203-c984-4df6-91bc-ba04e655e541" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656556 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0df655-cdf4-4a30-bd80-2c6ac270d5b8" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656564 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656575 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d89d24-0fd5-41c1-a392-27a63409d1c3" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656586 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad68a9ea-9986-4c5e-a87f-69f9c237a066" containerName="registry-server" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656593 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.656601 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31602df-d2bc-40de-93be-42600c22a9c1" containerName="marketplace-operator" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.657487 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.659969 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.666614 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9l658"] Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.778952 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwdk\" (UniqueName: \"kubernetes.io/projected/a007a56c-cd63-4539-9315-a48129a2f363-kube-api-access-wxwdk\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.779082 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-catalog-content\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.779121 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-utilities\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.880678 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-catalog-content\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.880777 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-utilities\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.880849 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwdk\" (UniqueName: \"kubernetes.io/projected/a007a56c-cd63-4539-9315-a48129a2f363-kube-api-access-wxwdk\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.881227 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-utilities\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.881227 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a007a56c-cd63-4539-9315-a48129a2f363-catalog-content\") pod \"certified-operators-9l658\" (UID: 
\"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.902563 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwdk\" (UniqueName: \"kubernetes.io/projected/a007a56c-cd63-4539-9315-a48129a2f363-kube-api-access-wxwdk\") pod \"certified-operators-9l658\" (UID: \"a007a56c-cd63-4539-9315-a48129a2f363\") " pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:37 crc kubenswrapper[4846]: I1201 00:14:37.991864 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:38 crc kubenswrapper[4846]: I1201 00:14:38.410315 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9l658"] Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.062242 4846 generic.go:334] "Generic (PLEG): container finished" podID="a007a56c-cd63-4539-9315-a48129a2f363" containerID="0beab19fa0b94d54e257f4a4463dfe20f30376e07fbea75fa027b2a4b7ef3265" exitCode=0 Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.062354 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9l658" event={"ID":"a007a56c-cd63-4539-9315-a48129a2f363","Type":"ContainerDied","Data":"0beab19fa0b94d54e257f4a4463dfe20f30376e07fbea75fa027b2a4b7ef3265"} Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.062415 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9l658" event={"ID":"a007a56c-cd63-4539-9315-a48129a2f363","Type":"ContainerStarted","Data":"a1a7e87c3b3cd346ec23b7d9675f06adb438665a6d061607403d0e9182a60363"} Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.064955 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.468129 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dcz9z"] Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.470716 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.470845 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcz9z"] Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.473571 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.606000 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99sr5\" (UniqueName: \"kubernetes.io/projected/d6de706a-4254-4a83-a9dd-b5ce1da2b907-kube-api-access-99sr5\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.606529 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-catalog-content\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.606777 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-utilities\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.712900 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99sr5\" (UniqueName: \"kubernetes.io/projected/d6de706a-4254-4a83-a9dd-b5ce1da2b907-kube-api-access-99sr5\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.712964 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-catalog-content\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.713039 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-utilities\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.713638 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-catalog-content\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.713658 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6de706a-4254-4a83-a9dd-b5ce1da2b907-utilities\") pod \"community-operators-dcz9z\" (UID: 
\"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.736935 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99sr5\" (UniqueName: \"kubernetes.io/projected/d6de706a-4254-4a83-a9dd-b5ce1da2b907-kube-api-access-99sr5\") pod \"community-operators-dcz9z\" (UID: \"d6de706a-4254-4a83-a9dd-b5ce1da2b907\") " pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:39 crc kubenswrapper[4846]: I1201 00:14:39.789975 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.054785 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-565r9"] Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.056151 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.058350 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.074084 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-565r9"] Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.082309 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9l658" event={"ID":"a007a56c-cd63-4539-9315-a48129a2f363","Type":"ContainerStarted","Data":"c64397feefc3fc90b190ce5cde239811cdefe700d662109d8a4861fef5620283"} Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.116747 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-utilities\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.117214 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-catalog-content\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.117351 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhs6\" (UniqueName: \"kubernetes.io/projected/97155d55-621b-4102-9ec2-1257be3341d7-kube-api-access-jmhs6\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.199079 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcz9z"] Dec 01 00:14:40 crc kubenswrapper[4846]: W1201 00:14:40.209284 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6de706a_4254_4a83_a9dd_b5ce1da2b907.slice/crio-6015cf9a384e94519cb23e0d1a772c457b36d8df1dc314161ad49f6aa1539291 WatchSource:0}: Error finding container 6015cf9a384e94519cb23e0d1a772c457b36d8df1dc314161ad49f6aa1539291: 
Status 404 returned error can't find the container with id 6015cf9a384e94519cb23e0d1a772c457b36d8df1dc314161ad49f6aa1539291 Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.218629 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-utilities\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.218724 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-catalog-content\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.218761 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhs6\" (UniqueName: \"kubernetes.io/projected/97155d55-621b-4102-9ec2-1257be3341d7-kube-api-access-jmhs6\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.219398 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-catalog-content\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.219661 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97155d55-621b-4102-9ec2-1257be3341d7-utilities\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.241853 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmhs6\" (UniqueName: \"kubernetes.io/projected/97155d55-621b-4102-9ec2-1257be3341d7-kube-api-access-jmhs6\") pod \"redhat-operators-565r9\" (UID: \"97155d55-621b-4102-9ec2-1257be3341d7\") " pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.376371 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:40 crc kubenswrapper[4846]: I1201 00:14:40.798383 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-565r9"] Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.091258 4846 generic.go:334] "Generic (PLEG): container finished" podID="a007a56c-cd63-4539-9315-a48129a2f363" containerID="c64397feefc3fc90b190ce5cde239811cdefe700d662109d8a4861fef5620283" exitCode=0 Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.091328 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9l658" event={"ID":"a007a56c-cd63-4539-9315-a48129a2f363","Type":"ContainerDied","Data":"c64397feefc3fc90b190ce5cde239811cdefe700d662109d8a4861fef5620283"} Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.094285 4846 generic.go:334] "Generic (PLEG): container finished" podID="d6de706a-4254-4a83-a9dd-b5ce1da2b907" containerID="15c078daa12e6ae3f99f0e2a33d3d53d1223098e3d0571e98332dc5fb3ec2a49" exitCode=0 Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.094368 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcz9z" event={"ID":"d6de706a-4254-4a83-a9dd-b5ce1da2b907","Type":"ContainerDied","Data":"15c078daa12e6ae3f99f0e2a33d3d53d1223098e3d0571e98332dc5fb3ec2a49"} Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.095538 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcz9z" event={"ID":"d6de706a-4254-4a83-a9dd-b5ce1da2b907","Type":"ContainerStarted","Data":"6015cf9a384e94519cb23e0d1a772c457b36d8df1dc314161ad49f6aa1539291"} Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.098999 4846 generic.go:334] "Generic (PLEG): container finished" podID="97155d55-621b-4102-9ec2-1257be3341d7" containerID="d25e8b1b341dd6bbc7da440ab8530e72e0a30a7fe66279939f62337a5ee6c0d4" exitCode=0 Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.099338 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-565r9" event={"ID":"97155d55-621b-4102-9ec2-1257be3341d7","Type":"ContainerDied","Data":"d25e8b1b341dd6bbc7da440ab8530e72e0a30a7fe66279939f62337a5ee6c0d4"} Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.099358 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-565r9" event={"ID":"97155d55-621b-4102-9ec2-1257be3341d7","Type":"ContainerStarted","Data":"e528a9f6cd9cdb1f44fb354b043f196d01bde36aabd0b5d200de343931f2b9db"} Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.849976 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.851364 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.855640 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.863843 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.940316 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.940669 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:41 crc kubenswrapper[4846]: I1201 00:14:41.940982 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4r8q\" (UniqueName: \"kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.042484 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.042586 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.042723 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4r8q\" (UniqueName: \"kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.043208 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.043228 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities\") pod \"redhat-marketplace-kkjg4\" (UID: 
\"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.067426 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4r8q\" (UniqueName: \"kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q\") pod \"redhat-marketplace-kkjg4\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.107861 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcz9z" event={"ID":"d6de706a-4254-4a83-a9dd-b5ce1da2b907","Type":"ContainerStarted","Data":"48921daba078ee6b9068e64fbd3b32359d2b517df0391d098ffb3bfedcdc49a4"} Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.111907 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-565r9" event={"ID":"97155d55-621b-4102-9ec2-1257be3341d7","Type":"ContainerStarted","Data":"9b7a4ef29eeb10505cfb4ce77efd281b5539fee607c894618d3d842bfb45620e"} Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.114913 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9l658" event={"ID":"a007a56c-cd63-4539-9315-a48129a2f363","Type":"ContainerStarted","Data":"f2cf1c874de1eaf9a949b991d90787c2a2bd080bc5129b6bbb97d784261d3cce"} Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.164373 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.166312 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9l658" podStartSLOduration=2.672702143 podStartE2EDuration="5.166296478s" podCreationTimestamp="2025-12-01 00:14:37 +0000 UTC" firstStartedPulling="2025-12-01 00:14:39.064453736 +0000 UTC m=+499.845222810" lastFinishedPulling="2025-12-01 00:14:41.558048071 +0000 UTC m=+502.338817145" observedRunningTime="2025-12-01 00:14:42.161858137 +0000 UTC m=+502.942627221" watchObservedRunningTime="2025-12-01 00:14:42.166296478 +0000 UTC m=+502.947065552" Dec 01 00:14:42 crc kubenswrapper[4846]: I1201 00:14:42.595113 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.125561 4846 generic.go:334] "Generic (PLEG): container finished" podID="d6de706a-4254-4a83-a9dd-b5ce1da2b907" containerID="48921daba078ee6b9068e64fbd3b32359d2b517df0391d098ffb3bfedcdc49a4" exitCode=0 Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.125675 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcz9z" event={"ID":"d6de706a-4254-4a83-a9dd-b5ce1da2b907","Type":"ContainerDied","Data":"48921daba078ee6b9068e64fbd3b32359d2b517df0391d098ffb3bfedcdc49a4"} Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.128726 4846 generic.go:334] "Generic (PLEG): container finished" podID="97155d55-621b-4102-9ec2-1257be3341d7" containerID="9b7a4ef29eeb10505cfb4ce77efd281b5539fee607c894618d3d842bfb45620e" exitCode=0 Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.128842 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-565r9" 
event={"ID":"97155d55-621b-4102-9ec2-1257be3341d7","Type":"ContainerDied","Data":"9b7a4ef29eeb10505cfb4ce77efd281b5539fee607c894618d3d842bfb45620e"} Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.133141 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerID="cbf4c831cda2e0cc44628a820f1310c36311b0990db091236f394107c86a5b58" exitCode=0 Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.134099 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerDied","Data":"cbf4c831cda2e0cc44628a820f1310c36311b0990db091236f394107c86a5b58"} Dec 01 00:14:43 crc kubenswrapper[4846]: I1201 00:14:43.134158 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerStarted","Data":"3c5378cffb5784f5f430e1345dc7f538bb232c989d9ae66e5a2851b2d5ec55a0"} Dec 01 00:14:44 crc kubenswrapper[4846]: I1201 00:14:44.139152 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-565r9" event={"ID":"97155d55-621b-4102-9ec2-1257be3341d7","Type":"ContainerStarted","Data":"ed5b701c964b19b942605ff6214dd49502c39549cea814c9acee45b6721401a6"} Dec 01 00:14:44 crc kubenswrapper[4846]: I1201 00:14:44.164122 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-565r9" podStartSLOduration=1.7131108990000001 podStartE2EDuration="4.164105243s" podCreationTimestamp="2025-12-01 00:14:40 +0000 UTC" firstStartedPulling="2025-12-01 00:14:41.102723809 +0000 UTC m=+501.883492883" lastFinishedPulling="2025-12-01 00:14:43.553718153 +0000 UTC m=+504.334487227" observedRunningTime="2025-12-01 00:14:44.163152544 +0000 UTC m=+504.943921618" watchObservedRunningTime="2025-12-01 00:14:44.164105243 +0000 UTC m=+504.944874317" Dec 01 00:14:45 crc kubenswrapper[4846]: I1201 00:14:45.147595 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerStarted","Data":"c3a7aea00d06204d7fca285a6ce565e0bcfa13931f1b12fae8bb4b05d936f484"} Dec 01 00:14:45 crc kubenswrapper[4846]: I1201 00:14:45.151284 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcz9z" event={"ID":"d6de706a-4254-4a83-a9dd-b5ce1da2b907","Type":"ContainerStarted","Data":"6e5eabdf48a23c8c870ba0a7d5a2a8136d98d171800a3077d2d2d08545cb0540"} Dec 01 00:14:45 crc kubenswrapper[4846]: I1201 00:14:45.196694 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dcz9z" podStartSLOduration=3.652539518 podStartE2EDuration="6.196642188s" podCreationTimestamp="2025-12-01 00:14:39 +0000 UTC" firstStartedPulling="2025-12-01 00:14:41.098374972 +0000 UTC m=+501.879144046" lastFinishedPulling="2025-12-01 00:14:43.642477642 +0000 UTC m=+504.423246716" observedRunningTime="2025-12-01 00:14:45.196102902 +0000 UTC m=+505.976871986" watchObservedRunningTime="2025-12-01 00:14:45.196642188 +0000 UTC m=+505.977411262" Dec 01 00:14:45 crc kubenswrapper[4846]: E1201 00:14:45.301224 4846 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4dea1cf_d4a8_4f59_b662_3f73f2c51672.slice/crio-c3a7aea00d06204d7fca285a6ce565e0bcfa13931f1b12fae8bb4b05d936f484.scope\": RecentStats: unable to find data in memory cache]" Dec 01 00:14:46 crc kubenswrapper[4846]: I1201 00:14:46.156979 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerID="c3a7aea00d06204d7fca285a6ce565e0bcfa13931f1b12fae8bb4b05d936f484" exitCode=0 Dec 01 00:14:46 crc kubenswrapper[4846]: I1201 00:14:46.158422 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerDied","Data":"c3a7aea00d06204d7fca285a6ce565e0bcfa13931f1b12fae8bb4b05d936f484"} Dec 01 00:14:47 crc kubenswrapper[4846]: I1201 00:14:47.992660 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:47 crc kubenswrapper[4846]: I1201 00:14:47.993004 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:48 crc kubenswrapper[4846]: I1201 00:14:48.038537 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:48 crc kubenswrapper[4846]: I1201 00:14:48.169954 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerStarted","Data":"2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d"} Dec 01 00:14:48 crc kubenswrapper[4846]: I1201 00:14:48.189538 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkjg4" podStartSLOduration=3.1794729840000002 podStartE2EDuration="7.189519838s" podCreationTimestamp="2025-12-01 00:14:41 +0000 UTC" firstStartedPulling="2025-12-01 00:14:43.135301266 +0000 UTC m=+503.916070340" lastFinishedPulling="2025-12-01 00:14:47.14534812 +0000 UTC m=+507.926117194" observedRunningTime="2025-12-01 00:14:48.188432096 +0000 UTC m=+508.969201170" watchObservedRunningTime="2025-12-01 00:14:48.189519838 +0000 UTC m=+508.970288912" Dec 01 00:14:48 crc kubenswrapper[4846]: I1201 00:14:48.224108 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9l658" Dec 01 00:14:49 crc kubenswrapper[4846]: I1201 00:14:49.790940 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:49 crc kubenswrapper[4846]: I1201 00:14:49.791344 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:49 crc kubenswrapper[4846]: I1201 00:14:49.836988 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:50 crc kubenswrapper[4846]: I1201 00:14:50.222915 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dcz9z" Dec 01 00:14:50 crc kubenswrapper[4846]: I1201 00:14:50.376659 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:50 crc kubenswrapper[4846]: I1201 00:14:50.377245 4846 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:50 crc kubenswrapper[4846]: I1201 00:14:50.426944 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:51 crc kubenswrapper[4846]: I1201 00:14:51.231663 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-565r9" Dec 01 00:14:52 crc kubenswrapper[4846]: I1201 00:14:52.165080 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:52 crc kubenswrapper[4846]: I1201 00:14:52.165194 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:52 crc kubenswrapper[4846]: I1201 00:14:52.205995 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:14:52 crc kubenswrapper[4846]: I1201 00:14:52.249797 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.163779 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g"] Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.165134 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.167300 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.167574 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.174641 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g"] Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.298541 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.298640 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.298928 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wjz\" (UniqueName: \"kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.400355 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.400416 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.400488 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wjz\" (UniqueName: \"kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.401287 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.405524 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.430274 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wjz\" (UniqueName: \"kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz\") pod \"collect-profiles-29409135-hnx9g\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.486596 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:00 crc kubenswrapper[4846]: I1201 00:15:00.885291 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g"] Dec 01 00:15:00 crc kubenswrapper[4846]: W1201 00:15:00.893879 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d2add66_bbcc_492c_8a30_58df3eb2dcd1.slice/crio-037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc WatchSource:0}: Error finding container 037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc: Status 404 returned error can't find the container with id 037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc Dec 01 00:15:01 crc kubenswrapper[4846]: I1201 00:15:01.239822 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" event={"ID":"4d2add66-bbcc-492c-8a30-58df3eb2dcd1","Type":"ContainerStarted","Data":"254a37569995a75a668a3f86c47d7272c3878bbe4e1229a0986f87c416356cc7"} Dec 01 00:15:01 crc kubenswrapper[4846]: I1201 00:15:01.240154 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" event={"ID":"4d2add66-bbcc-492c-8a30-58df3eb2dcd1","Type":"ContainerStarted","Data":"037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc"} Dec 01 00:15:01 crc kubenswrapper[4846]: I1201 00:15:01.259440 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" podStartSLOduration=1.2594209140000001 podStartE2EDuration="1.259420914s" podCreationTimestamp="2025-12-01 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:15:01.254758405 +0000 UTC m=+522.035527479" watchObservedRunningTime="2025-12-01 00:15:01.259420914 +0000 UTC m=+522.040189998" Dec 01 00:15:02 crc kubenswrapper[4846]: I1201 00:15:02.247738 4846 generic.go:334] "Generic (PLEG): container finished" podID="4d2add66-bbcc-492c-8a30-58df3eb2dcd1" containerID="254a37569995a75a668a3f86c47d7272c3878bbe4e1229a0986f87c416356cc7" exitCode=0 Dec 01 00:15:02 crc kubenswrapper[4846]: I1201 00:15:02.247791 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" event={"ID":"4d2add66-bbcc-492c-8a30-58df3eb2dcd1","Type":"ContainerDied","Data":"254a37569995a75a668a3f86c47d7272c3878bbe4e1229a0986f87c416356cc7"} Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.548945 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.647792 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume\") pod \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.647862 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wjz\" (UniqueName: \"kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz\") pod \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.647914 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume\") pod \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\" (UID: \"4d2add66-bbcc-492c-8a30-58df3eb2dcd1\") " Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.649704 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d2add66-bbcc-492c-8a30-58df3eb2dcd1" (UID: "4d2add66-bbcc-492c-8a30-58df3eb2dcd1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.656867 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d2add66-bbcc-492c-8a30-58df3eb2dcd1" (UID: "4d2add66-bbcc-492c-8a30-58df3eb2dcd1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.670212 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz" (OuterVolumeSpecName: "kube-api-access-k4wjz") pod "4d2add66-bbcc-492c-8a30-58df3eb2dcd1" (UID: "4d2add66-bbcc-492c-8a30-58df3eb2dcd1"). InnerVolumeSpecName "kube-api-access-k4wjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.749550 4846 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.749596 4846 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:15:03 crc kubenswrapper[4846]: I1201 00:15:03.749605 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wjz\" (UniqueName: \"kubernetes.io/projected/4d2add66-bbcc-492c-8a30-58df3eb2dcd1-kube-api-access-k4wjz\") on node \"crc\" DevicePath \"\"" Dec 01 00:15:04 crc kubenswrapper[4846]: I1201 00:15:04.262188 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" event={"ID":"4d2add66-bbcc-492c-8a30-58df3eb2dcd1","Type":"ContainerDied","Data":"037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc"} Dec 01 00:15:04 crc kubenswrapper[4846]: I1201 00:15:04.262247 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="037d385336ebcdd5dfc4425f4f3f612b713b892bcaee3e14c738982c2ce387cc" Dec 01 00:15:04 crc kubenswrapper[4846]: I1201 00:15:04.262260 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409135-hnx9g" Dec 01 00:15:55 crc kubenswrapper[4846]: I1201 00:15:55.419859 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:15:55 crc kubenswrapper[4846]: I1201 00:15:55.420597 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:16:25 crc kubenswrapper[4846]: I1201 00:16:25.420640 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:16:25 crc kubenswrapper[4846]: I1201 00:16:25.421721 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:16:55 crc kubenswrapper[4846]: I1201 00:16:55.420287 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:16:55 crc kubenswrapper[4846]: I1201 
00:16:55.421176 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:16:55 crc kubenswrapper[4846]: I1201 00:16:55.421254 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:16:55 crc kubenswrapper[4846]: I1201 00:16:55.422194 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:16:55 crc kubenswrapper[4846]: I1201 00:16:55.422308 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c" gracePeriod=600 Dec 01 00:16:56 crc kubenswrapper[4846]: I1201 00:16:56.070874 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c" exitCode=0 Dec 01 00:16:56 crc kubenswrapper[4846]: I1201 00:16:56.070972 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c"} Dec 01 00:16:56 crc kubenswrapper[4846]: I1201 00:16:56.071403 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581"} Dec 01 00:16:56 crc kubenswrapper[4846]: I1201 00:16:56.071443 4846 scope.go:117] "RemoveContainer" containerID="3ac2e5c683905e3f4d0a34f11ca9603ade698a0381b398171743ea10eb159b79" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.044861 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhp4q"] Dec 01 00:18:44 crc kubenswrapper[4846]: E1201 00:18:44.045659 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2add66-bbcc-492c-8a30-58df3eb2dcd1" containerName="collect-profiles" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.045675 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2add66-bbcc-492c-8a30-58df3eb2dcd1" containerName="collect-profiles" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.045808 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2add66-bbcc-492c-8a30-58df3eb2dcd1" containerName="collect-profiles" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.046273 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.067568 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhp4q"] Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.129939 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-certificates\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130002 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-trusted-ca\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130026 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a48fbded-7b3c-488b-8589-ebd99b7eb971-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130056 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130073 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-bound-sa-token\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130100 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a48fbded-7b3c-488b-8589-ebd99b7eb971-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130126 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krjk2\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-kube-api-access-krjk2\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.130274 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-tls\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.159884 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231044 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a48fbded-7b3c-488b-8589-ebd99b7eb971-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231096 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krjk2\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-kube-api-access-krjk2\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231133 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-tls\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231174 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-certificates\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231205 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-trusted-ca\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231237 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a48fbded-7b3c-488b-8589-ebd99b7eb971-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.231267 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-bound-sa-token\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.232003 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a48fbded-7b3c-488b-8589-ebd99b7eb971-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.232565 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-trusted-ca\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.232806 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-certificates\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.237423 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-registry-tls\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.238542 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a48fbded-7b3c-488b-8589-ebd99b7eb971-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.248282 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krjk2\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-kube-api-access-krjk2\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.248741 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a48fbded-7b3c-488b-8589-ebd99b7eb971-bound-sa-token\") pod \"image-registry-66df7c8f76-nhp4q\" (UID: \"a48fbded-7b3c-488b-8589-ebd99b7eb971\") " pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.370771 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.575197 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nhp4q"] Dec 01 00:18:44 crc kubenswrapper[4846]: I1201 00:18:44.715915 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" event={"ID":"a48fbded-7b3c-488b-8589-ebd99b7eb971","Type":"ContainerStarted","Data":"a5a9059bd7ee33023106f6e431dfe8a9886fe6e15a6b4fc9809cd38bc570726f"} Dec 01 00:18:45 crc kubenswrapper[4846]: I1201 00:18:45.721725 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" event={"ID":"a48fbded-7b3c-488b-8589-ebd99b7eb971","Type":"ContainerStarted","Data":"2aadbd9655382921c1ecd1c4f1cf340d9a9687fa2be0640db975bab633e8cc5f"} Dec 01 00:18:45 crc kubenswrapper[4846]: I1201 00:18:45.722052 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:18:45 crc kubenswrapper[4846]: I1201 00:18:45.741456 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" podStartSLOduration=1.741435742 podStartE2EDuration="1.741435742s" podCreationTimestamp="2025-12-01 00:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:18:45.737987044 +0000 UTC m=+746.518756128" watchObservedRunningTime="2025-12-01 00:18:45.741435742 +0000 UTC m=+746.522204816" Dec 01 00:18:55 crc kubenswrapper[4846]: I1201 00:18:55.419954 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:18:55 crc kubenswrapper[4846]: I1201 00:18:55.420547 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:18:58 crc kubenswrapper[4846]: I1201 00:18:58.440633 4846 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 01 00:19:04 crc kubenswrapper[4846]: I1201 00:19:04.378156 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nhp4q" Dec 01 00:19:04 crc kubenswrapper[4846]: I1201 00:19:04.451034 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:19:25 crc kubenswrapper[4846]: I1201 00:19:25.420170 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:19:25 crc kubenswrapper[4846]: I1201 00:19:25.420786 4846 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:19:29 crc kubenswrapper[4846]: I1201 00:19:29.507010 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" podUID="8ae23581-006a-44dd-aae2-d85d847dda2e" containerName="registry" containerID="cri-o://7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.075206 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.251105 4846 generic.go:334] "Generic (PLEG): container finished" podID="8ae23581-006a-44dd-aae2-d85d847dda2e" containerID="7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d" exitCode=0 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.251156 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" event={"ID":"8ae23581-006a-44dd-aae2-d85d847dda2e","Type":"ContainerDied","Data":"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d"} Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.251187 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" event={"ID":"8ae23581-006a-44dd-aae2-d85d847dda2e","Type":"ContainerDied","Data":"41531b79fac5fe433039a9a4870b881af24f544e789c1f3bc326ed058cce73d1"} Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.251209 4846 scope.go:117] "RemoveContainer" containerID="7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.251208 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbsfs" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.255157 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.255221 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.255289 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.255352 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.255372 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvwhh\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.256140 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.256286 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.256318 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.256454 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8ae23581-006a-44dd-aae2-d85d847dda2e\" (UID: \"8ae23581-006a-44dd-aae2-d85d847dda2e\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.257137 4846 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.257799 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.265326 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.267073 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh" (OuterVolumeSpecName: "kube-api-access-dvwhh") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "kube-api-access-dvwhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.267382 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.267620 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.272694 4846 scope.go:117] "RemoveContainer" containerID="7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.272949 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.273335 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d\": container with ID starting with 7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d not found: ID does not exist" containerID="7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.273395 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d"} err="failed to get container status \"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d\": rpc error: code = NotFound desc = could not find container \"7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d\": container with ID starting with 7ca8181d8b18808dd3cf58b444663f54fad6e44bcf9f7f940346e65c10c1ae7d not found: ID does not exist" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.275940 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8ae23581-006a-44dd-aae2-d85d847dda2e" (UID: "8ae23581-006a-44dd-aae2-d85d847dda2e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.358964 4846 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ae23581-006a-44dd-aae2-d85d847dda2e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.359053 4846 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ae23581-006a-44dd-aae2-d85d847dda2e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.359068 4846 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.359079 4846 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.359094 4846 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ae23581-006a-44dd-aae2-d85d847dda2e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.359106 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvwhh\" (UniqueName: \"kubernetes.io/projected/8ae23581-006a-44dd-aae2-d85d847dda2e-kube-api-access-dvwhh\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.402972 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fpx9q"] Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403347 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-controller" containerID="cri-o://939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403385 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="nbdb" containerID="cri-o://abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403445 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-node" containerID="cri-o://4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403474 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-acl-logging" containerID="cri-o://974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403518 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" 
podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="sbdb" containerID="cri-o://1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403522 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="northd" containerID="cri-o://7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.403498 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.443122 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" containerID="cri-o://f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" gracePeriod=30 Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.628158 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.632854 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbsfs"] Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.703913 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/3.log" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.706529 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovn-acl-logging/0.log" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.707413 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovn-controller/0.log" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.708061 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.764889 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.764939 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.764964 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.764984 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765017 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765040 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765029 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash" (OuterVolumeSpecName: "host-slash") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765058 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765129 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765146 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765171 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765183 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765192 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765213 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765227 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765248 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765281 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765303 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765346 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765390 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765463 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765488 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765542 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765595 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.765642 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units\") pod \"358371ac-c594-492b-98ad-0da4bc7d9d16\" (UID: \"358371ac-c594-492b-98ad-0da4bc7d9d16\") " Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766185 4846 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766203 4846 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766213 4846 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-slash\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766223 4846 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766256 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766278 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766291 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6chmp"] Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766529 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766545 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766554 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-acl-logging" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766561 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-acl-logging" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766573 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766582 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766591 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766599 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766609 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766615 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766625 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766632 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766645 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="nbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766652 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="nbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766662 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-node" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766669 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-node" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766698 4846 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8ae23581-006a-44dd-aae2-d85d847dda2e" containerName="registry" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766709 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ae23581-006a-44dd-aae2-d85d847dda2e" containerName="registry" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766722 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766730 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766740 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="northd" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766747 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="northd" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766756 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kubecfg-setup" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766763 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kubecfg-setup" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.766773 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="sbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766779 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="sbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766890 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="northd" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766904 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-acl-logging" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766916 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766926 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ae23581-006a-44dd-aae2-d85d847dda2e" containerName="registry" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766934 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="sbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766943 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766951 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="nbdb" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766962 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-node" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766972 4846 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="kube-rbac-proxy-ovn-metrics" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766981 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766988 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766997 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovn-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.767005 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: E1201 00:19:31.767115 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.767123 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerName="ovnkube-controller" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766298 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766714 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket" (OuterVolumeSpecName: "log-socket") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.766738 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.767881 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log" (OuterVolumeSpecName: "node-log") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.767888 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.767939 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.768658 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.770126 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.770379 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.770421 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.770859 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.771915 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9" (OuterVolumeSpecName: "kube-api-access-2xpt9") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "kube-api-access-2xpt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.772929 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.782660 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "358371ac-c594-492b-98ad-0da4bc7d9d16" (UID: "358371ac-c594-492b-98ad-0da4bc7d9d16"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867381 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-systemd-units\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867437 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-kubelet\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867453 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-log-socket\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867591 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-ovn\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867753 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-systemd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867824 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-config\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867922 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-var-lib-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867968 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.867996 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-env-overrides\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868021 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-node-log\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868047 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovn-node-metrics-cert\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868075 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-netns\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868099 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-slash\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868137 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-netd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868177 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-bin\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868260 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-etc-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868300 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868481 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74nz\" (UniqueName: \"kubernetes.io/projected/c4f3ac75-6848-42b0-9508-ca561a5754f6-kube-api-access-k74nz\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868543 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-script-lib\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868616 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868703 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xpt9\" (UniqueName: \"kubernetes.io/projected/358371ac-c594-492b-98ad-0da4bc7d9d16-kube-api-access-2xpt9\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868730 4846 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868747 4846 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868760 4846 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868774 4846 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868785 4846 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868796 4846 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: 
I1201 00:19:31.868808 4846 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868819 4846 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868830 4846 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868842 4846 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-node-log\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868851 4846 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868862 4846 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/358371ac-c594-492b-98ad-0da4bc7d9d16-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868871 4846 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-log-socket\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868884 4846 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/358371ac-c594-492b-98ad-0da4bc7d9d16-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.868894 4846 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/358371ac-c594-492b-98ad-0da4bc7d9d16-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969420 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74nz\" (UniqueName: \"kubernetes.io/projected/c4f3ac75-6848-42b0-9508-ca561a5754f6-kube-api-access-k74nz\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969483 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-script-lib\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969515 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: 
\"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969550 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-systemd-units\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969567 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-kubelet\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969584 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-log-socket\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969608 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-ovn\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969625 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-systemd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969643 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-config\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969667 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-var-lib-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969712 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969734 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-env-overrides\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc 
kubenswrapper[4846]: I1201 00:19:31.969754 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-node-log\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969754 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-ovn\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969817 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-log-socket\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969821 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969771 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovn-node-metrics-cert\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969854 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-var-lib-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969878 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-netns\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969887 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-systemd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969897 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-slash\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969921 4846 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-netd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969938 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-bin\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969954 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-etc-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969978 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970083 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-ovn-kubernetes\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969809 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-kubelet\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970128 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-run-netns\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.969869 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-systemd-units\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970183 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-slash\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970178 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-run-openvswitch\") 
pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970216 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-bin\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970255 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-node-log\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970234 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-etc-openvswitch\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970281 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4f3ac75-6848-42b0-9508-ca561a5754f6-host-cni-netd\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970291 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-script-lib\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970507 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-env-overrides\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.970789 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovnkube-config\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.973170 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c4f3ac75-6848-42b0-9508-ca561a5754f6-ovn-node-metrics-cert\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:31 crc kubenswrapper[4846]: I1201 00:19:31.985509 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74nz\" (UniqueName: \"kubernetes.io/projected/c4f3ac75-6848-42b0-9508-ca561a5754f6-kube-api-access-k74nz\") pod \"ovnkube-node-6chmp\" (UID: \"c4f3ac75-6848-42b0-9508-ca561a5754f6\") " pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:32 crc 
kubenswrapper[4846]: I1201 00:19:32.088280 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:32 crc kubenswrapper[4846]: W1201 00:19:32.121787 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f3ac75_6848_42b0_9508_ca561a5754f6.slice/crio-15a2460e60588a35dd49b2efe50c0f5afdf9859b3506f58d6a898ae10984d4a6 WatchSource:0}: Error finding container 15a2460e60588a35dd49b2efe50c0f5afdf9859b3506f58d6a898ae10984d4a6: Status 404 returned error can't find the container with id 15a2460e60588a35dd49b2efe50c0f5afdf9859b3506f58d6a898ae10984d4a6 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.260934 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/2.log" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.261425 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/1.log" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.261489 4846 generic.go:334] "Generic (PLEG): container finished" podID="607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c" containerID="cc4d60edb400f37047e3f32cabed2e42ffd672616820147beaed34545b87f90f" exitCode=2 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.261813 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerDied","Data":"cc4d60edb400f37047e3f32cabed2e42ffd672616820147beaed34545b87f90f"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.261894 4846 scope.go:117] "RemoveContainer" containerID="f0ec509878a22d6027bb614d44f48be55376865566184059d013c9f90bb7707f" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.263264 4846 scope.go:117] "RemoveContainer" containerID="cc4d60edb400f37047e3f32cabed2e42ffd672616820147beaed34545b87f90f" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.267716 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovnkube-controller/3.log" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.273973 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovn-acl-logging/0.log" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.275715 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fpx9q_358371ac-c594-492b-98ad-0da4bc7d9d16/ovn-controller/0.log" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276570 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276629 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276643 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: 
I1201 00:19:32.276660 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276651 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276728 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276739 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276747 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276946 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276972 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.276669 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277019 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" exitCode=0 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277041 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" exitCode=143 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277057 4846 generic.go:334] "Generic (PLEG): container finished" podID="358371ac-c594-492b-98ad-0da4bc7d9d16" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" exitCode=143 Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277124 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277196 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277216 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277229 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277236 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277242 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277249 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277255 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277261 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277268 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277274 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277293 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277303 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277310 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277316 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277322 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277328 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277335 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277340 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277347 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277355 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277361 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277369 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277378 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277385 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277394 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277401 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277407 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277413 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277419 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277425 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277433 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277440 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277450 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fpx9q" event={"ID":"358371ac-c594-492b-98ad-0da4bc7d9d16","Type":"ContainerDied","Data":"0d493dd81f93c5fbddc72234791d431ef70c361c190160174be22e84a64ecbe1"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277461 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277470 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277478 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277485 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277492 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277499 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277507 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277513 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277519 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.277524 4846 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.280434 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"15a2460e60588a35dd49b2efe50c0f5afdf9859b3506f58d6a898ae10984d4a6"} Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.335806 4846 scope.go:117] "RemoveContainer" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.336009 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fpx9q"] Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.342708 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fpx9q"] Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.357843 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.435284 4846 scope.go:117] "RemoveContainer" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.447849 4846 scope.go:117] "RemoveContainer" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.466287 4846 scope.go:117] "RemoveContainer" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.481308 4846 scope.go:117] "RemoveContainer" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.498084 4846 scope.go:117] "RemoveContainer" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.514036 4846 scope.go:117] "RemoveContainer" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.529032 4846 scope.go:117] "RemoveContainer" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.545099 4846 scope.go:117] "RemoveContainer" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.563983 4846 scope.go:117] "RemoveContainer" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.564496 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": container with ID starting with f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2 not found: ID does not exist" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.564525 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} err="failed to get container status \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": rpc error: code = 
NotFound desc = could not find container \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": container with ID starting with f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.564546 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.565137 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": container with ID starting with 10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59 not found: ID does not exist" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.565232 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} err="failed to get container status \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": rpc error: code = NotFound desc = could not find container \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": container with ID starting with 10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.565340 4846 scope.go:117] "RemoveContainer" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.565831 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": container with ID starting with 1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d not found: ID does not exist" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.565877 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} err="failed to get container status \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": rpc error: code = NotFound desc = could not find container \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": container with ID starting with 1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.565904 4846 scope.go:117] "RemoveContainer" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.566210 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": container with ID starting with abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a not found: ID does not exist" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566249 4846 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} err="failed to get container status \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": rpc error: code = NotFound desc = could not find container \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": container with ID starting with abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566272 4846 scope.go:117] "RemoveContainer" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.566496 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": container with ID starting with 7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c not found: ID does not exist" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566516 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} err="failed to get container status \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": rpc error: code = NotFound desc = could not find container \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": container with ID starting with 7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566531 4846 scope.go:117] "RemoveContainer" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.566804 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": container with ID starting with c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e not found: ID does not exist" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566824 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} err="failed to get container status \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": rpc error: code = NotFound desc = could not find container \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": container with ID starting with c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.566836 4846 scope.go:117] "RemoveContainer" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.567092 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": container with ID starting with 4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d not found: ID does not exist" 
containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.567112 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} err="failed to get container status \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": rpc error: code = NotFound desc = could not find container \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": container with ID starting with 4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.567127 4846 scope.go:117] "RemoveContainer" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.567350 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": container with ID starting with 974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630 not found: ID does not exist" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.567379 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} err="failed to get container status \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": rpc error: code = NotFound desc = could not find container \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": container with ID starting with 974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.567391 4846 scope.go:117] "RemoveContainer" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.567899 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": container with ID starting with 939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487 not found: ID does not exist" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.568028 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} err="failed to get container status \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": rpc error: code = NotFound desc = could not find container \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": container with ID starting with 939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.568122 4846 scope.go:117] "RemoveContainer" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: E1201 00:19:32.568520 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": container with ID starting with 2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435 not found: ID does not exist" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.568640 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} err="failed to get container status \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": rpc error: code = NotFound desc = could not find container \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": container with ID starting with 2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.568756 4846 scope.go:117] "RemoveContainer" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.569231 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} err="failed to get container status \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": rpc error: code = NotFound desc = could not find container \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": container with ID starting with f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.569336 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.569803 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} err="failed to get container status \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": rpc error: code = NotFound desc = could not find container \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": container with ID starting with 10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.569835 4846 scope.go:117] "RemoveContainer" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.570163 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} err="failed to get container status \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": rpc error: code = NotFound desc = could not find container \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": container with ID starting with 1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.570243 4846 scope.go:117] "RemoveContainer" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.570591 4846 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} err="failed to get container status \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": rpc error: code = NotFound desc = could not find container \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": container with ID starting with abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.570658 4846 scope.go:117] "RemoveContainer" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571034 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} err="failed to get container status \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": rpc error: code = NotFound desc = could not find container \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": container with ID starting with 7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571066 4846 scope.go:117] "RemoveContainer" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571403 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} err="failed to get container status \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": rpc error: code = NotFound desc = could not find container \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": container with ID starting with c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571425 4846 scope.go:117] "RemoveContainer" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571716 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} err="failed to get container status \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": rpc error: code = NotFound desc = could not find container \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": container with ID starting with 4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.571735 4846 scope.go:117] "RemoveContainer" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.572272 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} err="failed to get container status \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": rpc error: code = NotFound desc = could not find container \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": container with ID starting with 974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630 not found: ID does not exist" Dec 
01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.572353 4846 scope.go:117] "RemoveContainer" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.572879 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} err="failed to get container status \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": rpc error: code = NotFound desc = could not find container \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": container with ID starting with 939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.572959 4846 scope.go:117] "RemoveContainer" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.573320 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} err="failed to get container status \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": rpc error: code = NotFound desc = could not find container \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": container with ID starting with 2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.573343 4846 scope.go:117] "RemoveContainer" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.573839 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} err="failed to get container status \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": rpc error: code = NotFound desc = could not find container \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": container with ID starting with f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.573925 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.574380 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} err="failed to get container status \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": rpc error: code = NotFound desc = could not find container \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": container with ID starting with 10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.574481 4846 scope.go:117] "RemoveContainer" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.574874 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} err="failed to get container status 
\"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": rpc error: code = NotFound desc = could not find container \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": container with ID starting with 1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.574896 4846 scope.go:117] "RemoveContainer" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.575232 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} err="failed to get container status \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": rpc error: code = NotFound desc = could not find container \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": container with ID starting with abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.575279 4846 scope.go:117] "RemoveContainer" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576234 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} err="failed to get container status \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": rpc error: code = NotFound desc = could not find container \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": container with ID starting with 7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576261 4846 scope.go:117] "RemoveContainer" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576479 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} err="failed to get container status \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": rpc error: code = NotFound desc = could not find container \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": container with ID starting with c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576581 4846 scope.go:117] "RemoveContainer" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576926 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} err="failed to get container status \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": rpc error: code = NotFound desc = could not find container \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": container with ID starting with 4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.576955 4846 scope.go:117] "RemoveContainer" 
containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.577327 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} err="failed to get container status \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": rpc error: code = NotFound desc = could not find container \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": container with ID starting with 974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.577398 4846 scope.go:117] "RemoveContainer" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.577970 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} err="failed to get container status \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": rpc error: code = NotFound desc = could not find container \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": container with ID starting with 939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.578119 4846 scope.go:117] "RemoveContainer" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.578558 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} err="failed to get container status \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": rpc error: code = NotFound desc = could not find container \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": container with ID starting with 2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.578645 4846 scope.go:117] "RemoveContainer" containerID="f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579120 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2"} err="failed to get container status \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": rpc error: code = NotFound desc = could not find container \"f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2\": container with ID starting with f7b35313b4b2100213a7d87f98e368af32bcbaafa3c002e8414d3abd52feacb2 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579142 4846 scope.go:117] "RemoveContainer" containerID="10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579389 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59"} err="failed to get container status \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": rpc error: code = NotFound desc = could not find 
container \"10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59\": container with ID starting with 10bd886f21a4b49b6527604945cdfb8150b4ec8aa23c78641ad00ef4fa18fc59 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579427 4846 scope.go:117] "RemoveContainer" containerID="1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579744 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d"} err="failed to get container status \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": rpc error: code = NotFound desc = could not find container \"1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d\": container with ID starting with 1e19b06cae5c7893833e7474bbd97d107f23cd0d553682e4964ca3447987fa1d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.579834 4846 scope.go:117] "RemoveContainer" containerID="abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580146 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a"} err="failed to get container status \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": rpc error: code = NotFound desc = could not find container \"abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a\": container with ID starting with abb76717e09040c9f6d3bcd9dfc48f61ee44802f2980265e81b9f72fd148643a not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580170 4846 scope.go:117] "RemoveContainer" containerID="7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580424 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c"} err="failed to get container status \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": rpc error: code = NotFound desc = could not find container \"7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c\": container with ID starting with 7d69d4488440dd9a62c121f83fcce385263e8714b4b79fdd73720f9e7a43c41c not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580499 4846 scope.go:117] "RemoveContainer" containerID="c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580899 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e"} err="failed to get container status \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": rpc error: code = NotFound desc = could not find container \"c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e\": container with ID starting with c767b4f9aca803b57a6a0fcd8b7e3fa06b0fef2f2f935925ff079ba2fc6ea49e not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.580979 4846 scope.go:117] "RemoveContainer" containerID="4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581240 4846 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d"} err="failed to get container status \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": rpc error: code = NotFound desc = could not find container \"4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d\": container with ID starting with 4ed358abc51c9a5da938cdc69ef9cd7876122adf7544b007e04473d9ce30115d not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581307 4846 scope.go:117] "RemoveContainer" containerID="974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581571 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630"} err="failed to get container status \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": rpc error: code = NotFound desc = could not find container \"974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630\": container with ID starting with 974788d538710611881c8eb510e6fe022425936bf2aa00cff3f1a6159b0b2630 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581592 4846 scope.go:117] "RemoveContainer" containerID="939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581818 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487"} err="failed to get container status \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": rpc error: code = NotFound desc = could not find container \"939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487\": container with ID starting with 939a9941600825e61095d208266e98a374eb63bd858ed50c4d1ab0a04e9ce487 not found: ID does not exist" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.581903 4846 scope.go:117] "RemoveContainer" containerID="2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435" Dec 01 00:19:32 crc kubenswrapper[4846]: I1201 00:19:32.582237 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435"} err="failed to get container status \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": rpc error: code = NotFound desc = could not find container \"2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435\": container with ID starting with 2005f6269c48d9c36577b585fb9233a8d0ddd48c2f2f10e34881498f519f0435 not found: ID does not exist" Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.297855 4846 generic.go:334] "Generic (PLEG): container finished" podID="c4f3ac75-6848-42b0-9508-ca561a5754f6" containerID="705260dbd4dd4c30b0c084a88dcf79fbb949cd2f943d5765fa72ba8ba5ad5227" exitCode=0 Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.297935 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerDied","Data":"705260dbd4dd4c30b0c084a88dcf79fbb949cd2f943d5765fa72ba8ba5ad5227"} Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.302015 4846 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gzjjx_607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c/kube-multus/2.log" Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.302134 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gzjjx" event={"ID":"607ab1f5-f2a2-4966-90d8-65d7d8fe0b9c","Type":"ContainerStarted","Data":"d6d1fbf1a865b073312f61ad32f85fce23c9312b141b2574e211da85e2f98e8e"} Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.590047 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="358371ac-c594-492b-98ad-0da4bc7d9d16" path="/var/lib/kubelet/pods/358371ac-c594-492b-98ad-0da4bc7d9d16/volumes" Dec 01 00:19:33 crc kubenswrapper[4846]: I1201 00:19:33.591818 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ae23581-006a-44dd-aae2-d85d847dda2e" path="/var/lib/kubelet/pods/8ae23581-006a-44dd-aae2-d85d847dda2e/volumes" Dec 01 00:19:34 crc kubenswrapper[4846]: I1201 00:19:34.339345 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"993786d0b248c99baad94acbb6392408380dae53e96056d4a28207fa609da3b9"} Dec 01 00:19:34 crc kubenswrapper[4846]: I1201 00:19:34.339743 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"4aafd87aa2495dcc65935c0b159a9356cda20e6d46441ab44f2ce36d5ef2a0f5"} Dec 01 00:19:34 crc kubenswrapper[4846]: I1201 00:19:34.339764 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"2ca2b448e58f66d44b544b4fa7a34ecbc6e4727e1932bdec14bd48b258e9e058"} Dec 01 00:19:34 crc kubenswrapper[4846]: I1201 00:19:34.339778 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"5e3f172d4c3b68ca1731005b9b3a23dc260b20c0aef73de753f92c629d490cdf"} Dec 01 00:19:34 crc kubenswrapper[4846]: I1201 00:19:34.339791 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"730512f20947e126d90327843123a762a2ed998716804b59257c7f2a8f7dee59"} Dec 01 00:19:35 crc kubenswrapper[4846]: I1201 00:19:35.348965 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"a0d4c27f90bf1fe0548fe90c85b5b9b78cd3f647d175de9d0755f0591e6058d9"} Dec 01 00:19:38 crc kubenswrapper[4846]: I1201 00:19:38.371388 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"75e1ce87c528a4854ea0d9d86f174ac96524d32f49e109ec007d3b359942f1a4"} Dec 01 00:19:42 crc kubenswrapper[4846]: I1201 00:19:42.400733 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" event={"ID":"c4f3ac75-6848-42b0-9508-ca561a5754f6","Type":"ContainerStarted","Data":"f4743810cb7963413ab513cbd1037cbef466ad02d6aafbf900ce7ec60fe28f36"} Dec 01 00:19:42 crc kubenswrapper[4846]: I1201 00:19:42.401021 4846 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:42 crc kubenswrapper[4846]: I1201 00:19:42.401181 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:42 crc kubenswrapper[4846]: I1201 00:19:42.428275 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" podStartSLOduration=11.428241716 podStartE2EDuration="11.428241716s" podCreationTimestamp="2025-12-01 00:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:19:42.42515574 +0000 UTC m=+803.205924814" watchObservedRunningTime="2025-12-01 00:19:42.428241716 +0000 UTC m=+803.209010790" Dec 01 00:19:42 crc kubenswrapper[4846]: I1201 00:19:42.428383 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:43 crc kubenswrapper[4846]: I1201 00:19:43.407452 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:43 crc kubenswrapper[4846]: I1201 00:19:43.437220 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:19:55 crc kubenswrapper[4846]: I1201 00:19:55.420194 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:19:55 crc kubenswrapper[4846]: I1201 00:19:55.420947 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:19:55 crc kubenswrapper[4846]: I1201 00:19:55.421101 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:19:55 crc kubenswrapper[4846]: I1201 00:19:55.421921 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:19:55 crc kubenswrapper[4846]: I1201 00:19:55.421991 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581" gracePeriod=600 Dec 01 00:19:56 crc kubenswrapper[4846]: I1201 00:19:56.486971 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581" exitCode=0 Dec 01 00:19:56 crc kubenswrapper[4846]: I1201 00:19:56.487060 4846 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581"} Dec 01 00:19:56 crc kubenswrapper[4846]: I1201 00:19:56.487556 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e"} Dec 01 00:19:56 crc kubenswrapper[4846]: I1201 00:19:56.487592 4846 scope.go:117] "RemoveContainer" containerID="bf97e1048be4b0c031fde4887c47a3c0d4fc2d3018cc03b3d309a8d4b1baba7c" Dec 01 00:20:02 crc kubenswrapper[4846]: I1201 00:20:02.114306 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6chmp" Dec 01 00:20:59 crc kubenswrapper[4846]: I1201 00:20:59.857215 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:20:59 crc kubenswrapper[4846]: I1201 00:20:59.858066 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkjg4" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="registry-server" containerID="cri-o://2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d" gracePeriod=30 Dec 01 00:21:00 crc kubenswrapper[4846]: E1201 00:21:00.054525 4846 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4dea1cf_d4a8_4f59_b662_3f73f2c51672.slice/crio-conmon-2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d.scope\": RecentStats: unable to find data in memory cache]" Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.818065 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerID="2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d" exitCode=0 Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.818115 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerDied","Data":"2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d"} Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.818143 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkjg4" event={"ID":"b4dea1cf-d4a8-4f59-b662-3f73f2c51672","Type":"ContainerDied","Data":"3c5378cffb5784f5f430e1345dc7f538bb232c989d9ae66e5a2851b2d5ec55a0"} Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.818158 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5378cffb5784f5f430e1345dc7f538bb232c989d9ae66e5a2851b2d5ec55a0" Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.831860 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.941757 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content\") pod \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.942125 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities\") pod \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.942191 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4r8q\" (UniqueName: \"kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q\") pod \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\" (UID: \"b4dea1cf-d4a8-4f59-b662-3f73f2c51672\") " Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.942925 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities" (OuterVolumeSpecName: "utilities") pod "b4dea1cf-d4a8-4f59-b662-3f73f2c51672" (UID: "b4dea1cf-d4a8-4f59-b662-3f73f2c51672"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.948289 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q" (OuterVolumeSpecName: "kube-api-access-n4r8q") pod "b4dea1cf-d4a8-4f59-b662-3f73f2c51672" (UID: "b4dea1cf-d4a8-4f59-b662-3f73f2c51672"). InnerVolumeSpecName "kube-api-access-n4r8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:00 crc kubenswrapper[4846]: I1201 00:21:00.961545 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4dea1cf-d4a8-4f59-b662-3f73f2c51672" (UID: "b4dea1cf-d4a8-4f59-b662-3f73f2c51672"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.043306 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.043343 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.043352 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4r8q\" (UniqueName: \"kubernetes.io/projected/b4dea1cf-d4a8-4f59-b662-3f73f2c51672-kube-api-access-n4r8q\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.823215 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkjg4" Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.840292 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:21:01 crc kubenswrapper[4846]: I1201 00:21:01.843506 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkjg4"] Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.589098 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" path="/var/lib/kubelet/pods/b4dea1cf-d4a8-4f59-b662-3f73f2c51672/volumes" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.980178 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg"] Dec 01 00:21:03 crc kubenswrapper[4846]: E1201 00:21:03.980422 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="registry-server" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.980436 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="registry-server" Dec 01 00:21:03 crc kubenswrapper[4846]: E1201 00:21:03.980450 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="extract-utilities" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.980458 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="extract-utilities" Dec 01 00:21:03 crc kubenswrapper[4846]: E1201 00:21:03.980476 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="extract-content" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.980484 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="extract-content" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.980593 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4dea1cf-d4a8-4f59-b662-3f73f2c51672" containerName="registry-server" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.981370 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.982200 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzn96\" (UniqueName: \"kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.982319 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.982354 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.984322 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 01 00:21:03 crc kubenswrapper[4846]: I1201 00:21:03.992190 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg"] Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.083475 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzn96\" (UniqueName: \"kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.083651 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.083817 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.084222 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.084239 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.105839 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzn96\" (UniqueName: \"kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.303659 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.529978 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg"] Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.848969 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerStarted","Data":"3f3c6e52f95359b7e35000d40ecce2b4a34726ad62987c590db3d1573b712f92"} Dec 01 00:21:04 crc kubenswrapper[4846]: I1201 00:21:04.849023 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerStarted","Data":"c9d8422eeaf867d392599ffb3e2e7cfae6b37ea47872ab6475ffc0b660754941"} Dec 01 00:21:05 crc kubenswrapper[4846]: I1201 00:21:05.855128 4846 generic.go:334] "Generic (PLEG): container finished" podID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerID="3f3c6e52f95359b7e35000d40ecce2b4a34726ad62987c590db3d1573b712f92" exitCode=0 Dec 01 00:21:05 crc kubenswrapper[4846]: I1201 00:21:05.855183 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerDied","Data":"3f3c6e52f95359b7e35000d40ecce2b4a34726ad62987c590db3d1573b712f92"} Dec 01 00:21:05 crc kubenswrapper[4846]: I1201 00:21:05.857237 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.528814 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.529932 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.545451 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.625741 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.625806 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtc8b\" (UniqueName: \"kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.625994 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.726958 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.727043 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.727073 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtc8b\" (UniqueName: \"kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.727489 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.727538 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.749614 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jtc8b\" (UniqueName: \"kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b\") pod \"redhat-operators-kgr6q\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:06 crc kubenswrapper[4846]: I1201 00:21:06.848075 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:07 crc kubenswrapper[4846]: I1201 00:21:07.179502 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:07 crc kubenswrapper[4846]: W1201 00:21:07.189575 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffa7386_acd5_49bd_b82c_c1e8c6f337e4.slice/crio-fe9b407d9faeca9f8c43aab0974d6923a3b2554b0af1a56be76f208200fbe677 WatchSource:0}: Error finding container fe9b407d9faeca9f8c43aab0974d6923a3b2554b0af1a56be76f208200fbe677: Status 404 returned error can't find the container with id fe9b407d9faeca9f8c43aab0974d6923a3b2554b0af1a56be76f208200fbe677 Dec 01 00:21:07 crc kubenswrapper[4846]: I1201 00:21:07.880277 4846 generic.go:334] "Generic (PLEG): container finished" podID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerID="ad9c3aeea50e33f38b0dd453734a377c6e88511bf8ed60f0317455f16badb215" exitCode=0 Dec 01 00:21:07 crc kubenswrapper[4846]: I1201 00:21:07.880333 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerDied","Data":"ad9c3aeea50e33f38b0dd453734a377c6e88511bf8ed60f0317455f16badb215"} Dec 01 00:21:07 crc kubenswrapper[4846]: I1201 00:21:07.880750 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerStarted","Data":"fe9b407d9faeca9f8c43aab0974d6923a3b2554b0af1a56be76f208200fbe677"} Dec 01 00:21:08 crc kubenswrapper[4846]: I1201 00:21:08.890938 4846 generic.go:334] "Generic (PLEG): container finished" podID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerID="afdc879841c56bce045f461e1a0cb8e26f45e6535898d03373fcbbcc173b602c" exitCode=0 Dec 01 00:21:08 crc kubenswrapper[4846]: I1201 00:21:08.891011 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerDied","Data":"afdc879841c56bce045f461e1a0cb8e26f45e6535898d03373fcbbcc173b602c"} Dec 01 00:21:10 crc kubenswrapper[4846]: I1201 00:21:10.903885 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerStarted","Data":"d16dc2871de69878913bf9d4110590c1e398d5c3a8399cb01cb9198af1cf1124"} Dec 01 00:21:11 crc kubenswrapper[4846]: I1201 00:21:11.910886 4846 generic.go:334] "Generic (PLEG): container finished" podID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerID="d16dc2871de69878913bf9d4110590c1e398d5c3a8399cb01cb9198af1cf1124" exitCode=0 Dec 01 00:21:11 crc kubenswrapper[4846]: I1201 00:21:11.910934 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" 
event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerDied","Data":"d16dc2871de69878913bf9d4110590c1e398d5c3a8399cb01cb9198af1cf1124"} Dec 01 00:21:12 crc kubenswrapper[4846]: I1201 00:21:12.916592 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerStarted","Data":"569676ad1ec3dff0eabaec4adb0a1179b30563f720411adc4dad84dda338ac7c"} Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.508350 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.522247 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle\") pod \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.522350 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util\") pod \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.522459 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzn96\" (UniqueName: \"kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96\") pod \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\" (UID: \"c5b5d881-fbb7-405c-92d5-f00c25d0c405\") " Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.524252 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle" (OuterVolumeSpecName: "bundle") pod "c5b5d881-fbb7-405c-92d5-f00c25d0c405" (UID: "c5b5d881-fbb7-405c-92d5-f00c25d0c405"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.534955 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util" (OuterVolumeSpecName: "util") pod "c5b5d881-fbb7-405c-92d5-f00c25d0c405" (UID: "c5b5d881-fbb7-405c-92d5-f00c25d0c405"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.598507 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96" (OuterVolumeSpecName: "kube-api-access-lzn96") pod "c5b5d881-fbb7-405c-92d5-f00c25d0c405" (UID: "c5b5d881-fbb7-405c-92d5-f00c25d0c405"). InnerVolumeSpecName "kube-api-access-lzn96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.600138 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn"] Dec 01 00:21:13 crc kubenswrapper[4846]: E1201 00:21:13.600539 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="extract" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.600789 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="extract" Dec 01 00:21:13 crc kubenswrapper[4846]: E1201 00:21:13.600877 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="util" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.600963 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="util" Dec 01 00:21:13 crc kubenswrapper[4846]: E1201 00:21:13.601039 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="pull" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.601183 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="pull" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.601452 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b5d881-fbb7-405c-92d5-f00c25d0c405" containerName="extract" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.603635 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn"] Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.603955 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624272 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624375 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87t9\" (UniqueName: \"kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624424 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624642 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624718 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b5d881-fbb7-405c-92d5-f00c25d0c405-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.624738 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzn96\" (UniqueName: \"kubernetes.io/projected/c5b5d881-fbb7-405c-92d5-f00c25d0c405-kube-api-access-lzn96\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.725554 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.725672 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.725764 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87t9\" (UniqueName: \"kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9\") pod 
\"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.726324 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.726331 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.935459 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87t9\" (UniqueName: \"kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.939856 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" event={"ID":"c5b5d881-fbb7-405c-92d5-f00c25d0c405","Type":"ContainerDied","Data":"c9d8422eeaf867d392599ffb3e2e7cfae6b37ea47872ab6475ffc0b660754941"} Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.939908 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9d8422eeaf867d392599ffb3e2e7cfae6b37ea47872ab6475ffc0b660754941" Dec 01 00:21:13 crc kubenswrapper[4846]: I1201 00:21:13.939912 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.226912 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.618204 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn"] Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.619565 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.645644 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn"] Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.741098 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.741501 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.741747 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7hbj\" (UniqueName: \"kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.842673 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7hbj\" (UniqueName: \"kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.842832 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.842875 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.843477 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.843599 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:14 crc kubenswrapper[4846]: I1201 00:21:14.866007 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7hbj\" (UniqueName: \"kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:15 crc kubenswrapper[4846]: I1201 00:21:15.046375 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:15 crc kubenswrapper[4846]: I1201 00:21:15.076984 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn"] Dec 01 00:21:15 crc kubenswrapper[4846]: W1201 00:21:15.093312 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60809e12_6548_49c7_9873_a97d3603e686.slice/crio-b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47 WatchSource:0}: Error finding container b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47: Status 404 returned error can't find the container with id b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47 Dec 01 00:21:15 crc kubenswrapper[4846]: I1201 00:21:15.338703 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn"] Dec 01 00:21:15 crc kubenswrapper[4846]: W1201 00:21:15.341587 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd68370f6_b067_4f78_b8c2_6ed2892b65ae.slice/crio-60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5 WatchSource:0}: Error finding container 60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5: Status 404 returned error can't find the container with id 60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5 Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.063979 4846 generic.go:334] "Generic (PLEG): container finished" podID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerID="569676ad1ec3dff0eabaec4adb0a1179b30563f720411adc4dad84dda338ac7c" exitCode=0 Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.064066 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerDied","Data":"569676ad1ec3dff0eabaec4adb0a1179b30563f720411adc4dad84dda338ac7c"} Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.068351 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" 
event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerStarted","Data":"c1603a7382b082f930d0c0e37146ea8abc3dfb47bf8204d2806caaa66487500e"} Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.068566 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerStarted","Data":"60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5"} Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.075895 4846 generic.go:334] "Generic (PLEG): container finished" podID="60809e12-6548-49c7-9873-a97d3603e686" containerID="2b9b08986dc159f3ae527645ad972ecd9094c00c5301bf2c7c71cdb060c58088" exitCode=0 Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.076020 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerDied","Data":"2b9b08986dc159f3ae527645ad972ecd9094c00c5301bf2c7c71cdb060c58088"} Dec 01 00:21:16 crc kubenswrapper[4846]: I1201 00:21:16.076382 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerStarted","Data":"b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47"} Dec 01 00:21:17 crc kubenswrapper[4846]: I1201 00:21:17.083018 4846 generic.go:334] "Generic (PLEG): container finished" podID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerID="c1603a7382b082f930d0c0e37146ea8abc3dfb47bf8204d2806caaa66487500e" exitCode=0 Dec 01 00:21:17 crc kubenswrapper[4846]: I1201 00:21:17.083089 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerDied","Data":"c1603a7382b082f930d0c0e37146ea8abc3dfb47bf8204d2806caaa66487500e"} Dec 01 00:21:17 crc kubenswrapper[4846]: I1201 00:21:17.091813 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerStarted","Data":"c3c77c2f8cba3ff4eadbb9d9cc095a1bc3673862457192b2bb2319fe33552e08"} Dec 01 00:21:17 crc kubenswrapper[4846]: I1201 00:21:17.121957 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kgr6q" podStartSLOduration=2.452638727 podStartE2EDuration="11.121933166s" podCreationTimestamp="2025-12-01 00:21:06 +0000 UTC" firstStartedPulling="2025-12-01 00:21:07.882717711 +0000 UTC m=+888.663486785" lastFinishedPulling="2025-12-01 00:21:16.55201215 +0000 UTC m=+897.332781224" observedRunningTime="2025-12-01 00:21:17.120312966 +0000 UTC m=+897.901082060" watchObservedRunningTime="2025-12-01 00:21:17.121933166 +0000 UTC m=+897.902702240" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.100903 4846 generic.go:334] "Generic (PLEG): container finished" podID="60809e12-6548-49c7-9873-a97d3603e686" containerID="3b5994ed074d5454174eb0f780b1caa0e2e573b8c156986b9e8eaa04abdaf3cb" exitCode=0 Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.100966 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" 
event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerDied","Data":"3b5994ed074d5454174eb0f780b1caa0e2e573b8c156986b9e8eaa04abdaf3cb"} Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.631963 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7"] Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.633676 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.635866 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7"] Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.738420 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.740065 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.742960 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.752278 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2nvj\" (UniqueName: \"kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.752659 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.752766 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.854702 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtr8\" (UniqueName: \"kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.855087 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities\") pod \"certified-operators-rdk9q\" (UID: 
\"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.855245 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2nvj\" (UniqueName: \"kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.855350 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.855487 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.855575 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.856021 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.856142 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.880101 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2nvj\" (UniqueName: \"kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.958941 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content\") pod 
\"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.959327 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtr8\" (UniqueName: \"kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.959476 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.959879 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.960004 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.962222 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:18 crc kubenswrapper[4846]: I1201 00:21:18.982408 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtr8\" (UniqueName: \"kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8\") pod \"certified-operators-rdk9q\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.062943 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.110756 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerStarted","Data":"aaed94a7566b81cfbc7606e9f0c6bff0aaef33f1e74110bb1cb0e9f72f095330"} Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.138264 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" podStartSLOduration=5.237956947 podStartE2EDuration="6.138246391s" podCreationTimestamp="2025-12-01 00:21:13 +0000 UTC" firstStartedPulling="2025-12-01 00:21:16.07866736 +0000 UTC m=+896.859436454" lastFinishedPulling="2025-12-01 00:21:16.978956824 +0000 UTC m=+897.759725898" observedRunningTime="2025-12-01 00:21:19.134054211 +0000 UTC m=+899.914823285" watchObservedRunningTime="2025-12-01 00:21:19.138246391 +0000 UTC m=+899.919015465" Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.648202 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.881369 4846 scope.go:117] "RemoveContainer" containerID="c3a7aea00d06204d7fca285a6ce565e0bcfa13931f1b12fae8bb4b05d936f484" Dec 01 00:21:19 crc kubenswrapper[4846]: I1201 00:21:19.986085 4846 scope.go:117] "RemoveContainer" containerID="2196f46f38fef7617ea6605c43660ec5a71069c5e8ac744625f8cfa0b3c6d76d" Dec 01 00:21:20 crc kubenswrapper[4846]: I1201 00:21:20.005858 4846 scope.go:117] "RemoveContainer" containerID="cbf4c831cda2e0cc44628a820f1310c36311b0990db091236f394107c86a5b58" Dec 01 00:21:20 crc kubenswrapper[4846]: I1201 00:21:20.025062 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7"] Dec 01 00:21:20 crc kubenswrapper[4846]: W1201 00:21:20.054930 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod018280e5_7f09_4ed8_81b4_0c26013fa732.slice/crio-10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184 WatchSource:0}: Error finding container 10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184: Status 404 returned error can't find the container with id 10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184 Dec 01 00:21:20 crc kubenswrapper[4846]: I1201 00:21:20.118311 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerStarted","Data":"8ee5bcf9c192b6a468bbf712301266e7961a5df57e00feae17009b2523c54d29"} Dec 01 00:21:20 crc kubenswrapper[4846]: I1201 00:21:20.123624 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerStarted","Data":"10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184"} Dec 01 00:21:20 crc kubenswrapper[4846]: I1201 00:21:20.125665 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" 
event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerStarted","Data":"9e8562b00001e3f1bd3e429396aad09288655a813b8efe1889adcd9b99b72e86"} Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.132574 4846 generic.go:334] "Generic (PLEG): container finished" podID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerID="8ee5bcf9c192b6a468bbf712301266e7961a5df57e00feae17009b2523c54d29" exitCode=0 Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.132699 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerDied","Data":"8ee5bcf9c192b6a468bbf712301266e7961a5df57e00feae17009b2523c54d29"} Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.136639 4846 generic.go:334] "Generic (PLEG): container finished" podID="60809e12-6548-49c7-9873-a97d3603e686" containerID="aaed94a7566b81cfbc7606e9f0c6bff0aaef33f1e74110bb1cb0e9f72f095330" exitCode=0 Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.136734 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerDied","Data":"aaed94a7566b81cfbc7606e9f0c6bff0aaef33f1e74110bb1cb0e9f72f095330"} Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.138502 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerStarted","Data":"c38fac507342c2572c7d8417df4d86be0e77278f480c6ece7f9550c2764669cc"} Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.151709 4846 generic.go:334] "Generic (PLEG): container finished" podID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerID="9ef489707006f60146c1c170e07cd1bb906dfe320628d41e020b0d8e43fdf314" exitCode=0 Dec 01 00:21:21 crc kubenswrapper[4846]: I1201 00:21:21.151768 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerDied","Data":"9ef489707006f60146c1c170e07cd1bb906dfe320628d41e020b0d8e43fdf314"} Dec 01 00:21:22 crc kubenswrapper[4846]: I1201 00:21:22.161986 4846 generic.go:334] "Generic (PLEG): container finished" podID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerID="8de22f6010c4adb97e264dff21aab50b920b2239a91d9f021394a91647b2bae1" exitCode=0 Dec 01 00:21:22 crc kubenswrapper[4846]: I1201 00:21:22.162518 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerDied","Data":"8de22f6010c4adb97e264dff21aab50b920b2239a91d9f021394a91647b2bae1"} Dec 01 00:21:22 crc kubenswrapper[4846]: I1201 00:21:22.164596 4846 generic.go:334] "Generic (PLEG): container finished" podID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerID="c38fac507342c2572c7d8417df4d86be0e77278f480c6ece7f9550c2764669cc" exitCode=0 Dec 01 00:21:22 crc kubenswrapper[4846]: I1201 00:21:22.164983 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerDied","Data":"c38fac507342c2572c7d8417df4d86be0e77278f480c6ece7f9550c2764669cc"} Dec 
01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.187769 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" event={"ID":"60809e12-6548-49c7-9873-a97d3603e686","Type":"ContainerDied","Data":"b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47"} Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.188411 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a2dc861c2c7a4db84ceb1f997c594b4226f700a2a4641e9d373c7a110b6a47" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.191247 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerStarted","Data":"e0639ef932293b7bc94ae355b088e4bd85b65ff5bb46929090e3b8b3635fb0a2"} Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.252223 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.404508 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w87t9\" (UniqueName: \"kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9\") pod \"60809e12-6548-49c7-9873-a97d3603e686\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.404633 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util\") pod \"60809e12-6548-49c7-9873-a97d3603e686\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.404654 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle\") pod \"60809e12-6548-49c7-9873-a97d3603e686\" (UID: \"60809e12-6548-49c7-9873-a97d3603e686\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.405737 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle" (OuterVolumeSpecName: "bundle") pod "60809e12-6548-49c7-9873-a97d3603e686" (UID: "60809e12-6548-49c7-9873-a97d3603e686"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.432739 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util" (OuterVolumeSpecName: "util") pod "60809e12-6548-49c7-9873-a97d3603e686" (UID: "60809e12-6548-49c7-9873-a97d3603e686"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.437571 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9" (OuterVolumeSpecName: "kube-api-access-w87t9") pod "60809e12-6548-49c7-9873-a97d3603e686" (UID: "60809e12-6548-49c7-9873-a97d3603e686"). InnerVolumeSpecName "kube-api-access-w87t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.509492 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.509561 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60809e12-6548-49c7-9873-a97d3603e686-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.509576 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w87t9\" (UniqueName: \"kubernetes.io/projected/60809e12-6548-49c7-9873-a97d3603e686-kube-api-access-w87t9\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.718900 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.920138 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle\") pod \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.920301 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7hbj\" (UniqueName: \"kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj\") pod \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.920343 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util\") pod \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\" (UID: \"d68370f6-b067-4f78-b8c2-6ed2892b65ae\") " Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.921156 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle" (OuterVolumeSpecName: "bundle") pod "d68370f6-b067-4f78-b8c2-6ed2892b65ae" (UID: "d68370f6-b067-4f78-b8c2-6ed2892b65ae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.926018 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj" (OuterVolumeSpecName: "kube-api-access-h7hbj") pod "d68370f6-b067-4f78-b8c2-6ed2892b65ae" (UID: "d68370f6-b067-4f78-b8c2-6ed2892b65ae"). InnerVolumeSpecName "kube-api-access-h7hbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:23 crc kubenswrapper[4846]: I1201 00:21:23.934750 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util" (OuterVolumeSpecName: "util") pod "d68370f6-b067-4f78-b8c2-6ed2892b65ae" (UID: "d68370f6-b067-4f78-b8c2-6ed2892b65ae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.021536 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7hbj\" (UniqueName: \"kubernetes.io/projected/d68370f6-b067-4f78-b8c2-6ed2892b65ae-kube-api-access-h7hbj\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.021595 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.021607 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d68370f6-b067-4f78-b8c2-6ed2892b65ae-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.199303 4846 generic.go:334] "Generic (PLEG): container finished" podID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerID="e0639ef932293b7bc94ae355b088e4bd85b65ff5bb46929090e3b8b3635fb0a2" exitCode=0 Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.199382 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerDied","Data":"e0639ef932293b7bc94ae355b088e4bd85b65ff5bb46929090e3b8b3635fb0a2"} Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.217054 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.217856 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.217961 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn" event={"ID":"d68370f6-b067-4f78-b8c2-6ed2892b65ae","Type":"ContainerDied","Data":"60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5"} Dec 01 00:21:24 crc kubenswrapper[4846]: I1201 00:21:24.218066 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a3d4d126e5e3ef0ad938ca22f735ab638487f0056a35d51bef7a61149fc6a5" Dec 01 00:21:26 crc kubenswrapper[4846]: I1201 00:21:26.271827 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerStarted","Data":"3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890"} Dec 01 00:21:26 crc kubenswrapper[4846]: I1201 00:21:26.324803 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdk9q" podStartSLOduration=4.025550848 podStartE2EDuration="8.324745148s" podCreationTimestamp="2025-12-01 00:21:18 +0000 UTC" firstStartedPulling="2025-12-01 00:21:21.157669653 +0000 UTC m=+901.938438727" lastFinishedPulling="2025-12-01 00:21:25.456863943 +0000 UTC m=+906.237633027" observedRunningTime="2025-12-01 00:21:26.306910532 +0000 UTC m=+907.087679626" watchObservedRunningTime="2025-12-01 00:21:26.324745148 +0000 UTC m=+907.105514232" Dec 01 00:21:26 crc kubenswrapper[4846]: I1201 00:21:26.848712 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:26 crc kubenswrapper[4846]: I1201 00:21:26.849282 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:26 crc kubenswrapper[4846]: I1201 00:21:26.970889 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:27 crc kubenswrapper[4846]: I1201 00:21:27.438696 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:28 crc kubenswrapper[4846]: I1201 00:21:28.797502 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:29 crc kubenswrapper[4846]: I1201 00:21:29.064302 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:29 crc kubenswrapper[4846]: I1201 00:21:29.064374 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:29 crc kubenswrapper[4846]: I1201 00:21:29.303112 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kgr6q" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="registry-server" containerID="cri-o://c3c77c2f8cba3ff4eadbb9d9cc095a1bc3673862457192b2bb2319fe33552e08" gracePeriod=2 Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.259513 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rdk9q" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" probeResult="failure" output=< Dec 01 00:21:30 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:21:30 crc kubenswrapper[4846]: > Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331496 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-5dzc7"] Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331811 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331828 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331847 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331859 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331876 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="pull" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331885 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="pull" Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331899 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="util" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331907 4846 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="util" Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331921 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="util" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331929 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="util" Dec 01 00:21:30 crc kubenswrapper[4846]: E1201 00:21:30.331942 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="pull" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.331949 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="pull" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.332072 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68370f6-b067-4f78-b8c2-6ed2892b65ae" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.332090 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="60809e12-6548-49c7-9873-a97d3603e686" containerName="extract" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.332583 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.335351 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.335569 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-6whgq" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.335604 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.344759 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-5dzc7"] Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.486059 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tktfs\" (UniqueName: \"kubernetes.io/projected/4654284a-ba35-403d-84ea-1e3f1dd8bc39-kube-api-access-tktfs\") pod \"interconnect-operator-5bb49f789d-5dzc7\" (UID: \"4654284a-ba35-403d-84ea-1e3f1dd8bc39\") " pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.587808 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tktfs\" (UniqueName: \"kubernetes.io/projected/4654284a-ba35-403d-84ea-1e3f1dd8bc39-kube-api-access-tktfs\") pod \"interconnect-operator-5bb49f789d-5dzc7\" (UID: \"4654284a-ba35-403d-84ea-1e3f1dd8bc39\") " pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.626710 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tktfs\" (UniqueName: \"kubernetes.io/projected/4654284a-ba35-403d-84ea-1e3f1dd8bc39-kube-api-access-tktfs\") pod \"interconnect-operator-5bb49f789d-5dzc7\" (UID: \"4654284a-ba35-403d-84ea-1e3f1dd8bc39\") " pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" Dec 01 00:21:30 crc kubenswrapper[4846]: I1201 00:21:30.650734 4846 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.748859 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.749904 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.755576 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.756139 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-bq65q" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.758360 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.773392 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.809573 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt54\" (UniqueName: \"kubernetes.io/projected/8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9-kube-api-access-2wt54\") pod \"obo-prometheus-operator-668cf9dfbb-vq7fk\" (UID: \"8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.893390 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.894244 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.896897 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.897077 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rpnh4" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.911767 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt54\" (UniqueName: \"kubernetes.io/projected/8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9-kube-api-access-2wt54\") pod \"obo-prometheus-operator-668cf9dfbb-vq7fk\" (UID: \"8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.921781 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.932625 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.934638 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.944627 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s"] Dec 01 00:21:31 crc kubenswrapper[4846]: I1201 00:21:31.961597 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt54\" (UniqueName: \"kubernetes.io/projected/8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9-kube-api-access-2wt54\") pod \"obo-prometheus-operator-668cf9dfbb-vq7fk\" (UID: \"8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.013788 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.013879 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.014048 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.014122 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.083104 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.083790 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-d2dpt"] Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.084875 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.090260 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.090600 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wfkgm" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.106287 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-d2dpt"] Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.118584 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.118651 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.118701 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.119489 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.124653 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.125316 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.145267 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/8e07cf8a-8c81-462f-9e58-b29e9260dc71-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s\" (UID: \"8e07cf8a-8c81-462f-9e58-b29e9260dc71\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.147456 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b7c7b93-7478-44f8-ab2f-5891c64d630a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq\" (UID: \"1b7c7b93-7478-44f8-ab2f-5891c64d630a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.214460 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.220700 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d309dd9-6adf-466f-a167-b1c57b2089d4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.220776 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klt4z\" (UniqueName: \"kubernetes.io/projected/4d309dd9-6adf-466f-a167-b1c57b2089d4-kube-api-access-klt4z\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.263885 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.266916 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dv92l"] Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.267839 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.270879 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4ddzx" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.282246 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dv92l"] Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.322325 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d309dd9-6adf-466f-a167-b1c57b2089d4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.322391 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klt4z\" (UniqueName: \"kubernetes.io/projected/4d309dd9-6adf-466f-a167-b1c57b2089d4-kube-api-access-klt4z\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.327480 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/4d309dd9-6adf-466f-a167-b1c57b2089d4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.342359 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klt4z\" (UniqueName: \"kubernetes.io/projected/4d309dd9-6adf-466f-a167-b1c57b2089d4-kube-api-access-klt4z\") pod \"observability-operator-d8bb48f5d-d2dpt\" (UID: \"4d309dd9-6adf-466f-a167-b1c57b2089d4\") " pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.346195 4846 generic.go:334] "Generic (PLEG): container finished" podID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerID="c3c77c2f8cba3ff4eadbb9d9cc095a1bc3673862457192b2bb2319fe33552e08" exitCode=0 Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.346240 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerDied","Data":"c3c77c2f8cba3ff4eadbb9d9cc095a1bc3673862457192b2bb2319fe33552e08"} Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.422276 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.423627 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56bf\" (UniqueName: \"kubernetes.io/projected/0c431140-8c85-47ad-b896-921fad1ac609-kube-api-access-d56bf\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.423700 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c431140-8c85-47ad-b896-921fad1ac609-openshift-service-ca\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.527305 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56bf\" (UniqueName: \"kubernetes.io/projected/0c431140-8c85-47ad-b896-921fad1ac609-kube-api-access-d56bf\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.527370 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c431140-8c85-47ad-b896-921fad1ac609-openshift-service-ca\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.529968 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c431140-8c85-47ad-b896-921fad1ac609-openshift-service-ca\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.552526 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56bf\" (UniqueName: \"kubernetes.io/projected/0c431140-8c85-47ad-b896-921fad1ac609-kube-api-access-d56bf\") pod \"perses-operator-5446b9c989-dv92l\" (UID: \"0c431140-8c85-47ad-b896-921fad1ac609\") " pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.601360 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.757996 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.832163 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities\") pod \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.832265 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content\") pod \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.832417 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtc8b\" (UniqueName: \"kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b\") pod \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\" (UID: \"fffa7386-acd5-49bd-b82c-c1e8c6f337e4\") " Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.834172 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities" (OuterVolumeSpecName: "utilities") pod "fffa7386-acd5-49bd-b82c-c1e8c6f337e4" (UID: "fffa7386-acd5-49bd-b82c-c1e8c6f337e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.842066 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b" (OuterVolumeSpecName: "kube-api-access-jtc8b") pod "fffa7386-acd5-49bd-b82c-c1e8c6f337e4" (UID: "fffa7386-acd5-49bd-b82c-c1e8c6f337e4"). InnerVolumeSpecName "kube-api-access-jtc8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.933695 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.933730 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtc8b\" (UniqueName: \"kubernetes.io/projected/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-kube-api-access-jtc8b\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:32 crc kubenswrapper[4846]: I1201 00:21:32.984912 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fffa7386-acd5-49bd-b82c-c1e8c6f337e4" (UID: "fffa7386-acd5-49bd-b82c-c1e8c6f337e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.037579 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fffa7386-acd5-49bd-b82c-c1e8c6f337e4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.181547 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq"] Dec 01 00:21:33 crc kubenswrapper[4846]: W1201 00:21:33.199519 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7c7b93_7478_44f8_ab2f_5891c64d630a.slice/crio-061f1fb24dcea3707e5ed3f0d1080b76a76d5a7017814ed4cd7ab22bc47f2a06 WatchSource:0}: Error finding container 061f1fb24dcea3707e5ed3f0d1080b76a76d5a7017814ed4cd7ab22bc47f2a06: Status 404 returned error can't find the container with id 061f1fb24dcea3707e5ed3f0d1080b76a76d5a7017814ed4cd7ab22bc47f2a06 Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.359270 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-5dzc7"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.361164 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerStarted","Data":"f37c4ce3d5f7069d6c0856e55913548e8db53ec2177d5b47f2d13ab79bd1031b"} Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.363967 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" event={"ID":"1b7c7b93-7478-44f8-ab2f-5891c64d630a","Type":"ContainerStarted","Data":"061f1fb24dcea3707e5ed3f0d1080b76a76d5a7017814ed4cd7ab22bc47f2a06"} Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.369364 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kgr6q" event={"ID":"fffa7386-acd5-49bd-b82c-c1e8c6f337e4","Type":"ContainerDied","Data":"fe9b407d9faeca9f8c43aab0974d6923a3b2554b0af1a56be76f208200fbe677"} Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.369452 4846 scope.go:117] "RemoveContainer" containerID="c3c77c2f8cba3ff4eadbb9d9cc095a1bc3673862457192b2bb2319fe33552e08" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.369493 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kgr6q" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.373598 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.425434 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.432971 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kgr6q"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.456983 4846 scope.go:117] "RemoveContainer" containerID="569676ad1ec3dff0eabaec4adb0a1179b30563f720411adc4dad84dda338ac7c" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.489725 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.497788 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-d2dpt"] Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.506862 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dv92l"] Dec 01 00:21:33 crc kubenswrapper[4846]: W1201 00:21:33.515475 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e07cf8a_8c81_462f_9e58_b29e9260dc71.slice/crio-0c8f7405652568649ed52175493eb746f7af7613f2e54eedd3bac9685dd485a0 WatchSource:0}: Error finding container 0c8f7405652568649ed52175493eb746f7af7613f2e54eedd3bac9685dd485a0: Status 404 returned error can't find the container with id 0c8f7405652568649ed52175493eb746f7af7613f2e54eedd3bac9685dd485a0 Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.530361 4846 scope.go:117] "RemoveContainer" containerID="ad9c3aeea50e33f38b0dd453734a377c6e88511bf8ed60f0317455f16badb215" Dec 01 00:21:33 crc kubenswrapper[4846]: I1201 00:21:33.587486 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" path="/var/lib/kubelet/pods/fffa7386-acd5-49bd-b82c-c1e8c6f337e4/volumes" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.063187 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-56b5c6c555-q9msd"] Dec 01 00:21:34 crc kubenswrapper[4846]: E1201 00:21:34.063472 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="extract-content" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.063489 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="extract-content" Dec 01 00:21:34 crc kubenswrapper[4846]: E1201 00:21:34.063505 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="registry-server" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.063513 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="registry-server" Dec 01 00:21:34 crc kubenswrapper[4846]: E1201 00:21:34.063536 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="extract-utilities" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.063546 
4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="extract-utilities" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.063668 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="fffa7386-acd5-49bd-b82c-c1e8c6f337e4" containerName="registry-server" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.064236 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.076031 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.078314 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-fmkkb" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.086377 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-56b5c6c555-q9msd"] Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.159120 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-apiservice-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.159170 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-webhook-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.159207 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gss5r\" (UniqueName: \"kubernetes.io/projected/3d05afe4-43b5-4fba-a245-c913a20a5301-kube-api-access-gss5r\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.261078 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-apiservice-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.261134 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-webhook-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.261180 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gss5r\" (UniqueName: \"kubernetes.io/projected/3d05afe4-43b5-4fba-a245-c913a20a5301-kube-api-access-gss5r\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " 
pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.271872 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-apiservice-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.271878 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d05afe4-43b5-4fba-a245-c913a20a5301-webhook-cert\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.285603 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gss5r\" (UniqueName: \"kubernetes.io/projected/3d05afe4-43b5-4fba-a245-c913a20a5301-kube-api-access-gss5r\") pod \"elastic-operator-56b5c6c555-q9msd\" (UID: \"3d05afe4-43b5-4fba-a245-c913a20a5301\") " pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.376711 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" event={"ID":"4d309dd9-6adf-466f-a167-b1c57b2089d4","Type":"ContainerStarted","Data":"c7dc3c2e00da0f4736627de0f7be8d93f9f2d251cbd9f89e9c69b827d38e9dc5"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.380079 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" event={"ID":"8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9","Type":"ContainerStarted","Data":"5a6408ee1793126bf2529d4fa2bdf3ed69df99201ce7ac2a18c4bfd0f559b438"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.381572 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dv92l" event={"ID":"0c431140-8c85-47ad-b896-921fad1ac609","Type":"ContainerStarted","Data":"474ec63643534e3e8f68acac53c4c3913bcbf858e604b62c92938e520cab6fee"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.382290 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.382907 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" event={"ID":"8e07cf8a-8c81-462f-9e58-b29e9260dc71","Type":"ContainerStarted","Data":"0c8f7405652568649ed52175493eb746f7af7613f2e54eedd3bac9685dd485a0"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.385385 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" event={"ID":"4654284a-ba35-403d-84ea-1e3f1dd8bc39","Type":"ContainerStarted","Data":"feb380be7d6a756e4f3055115b19fecfc643f3b15b3d4e63ab6e69024c9955bd"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.388094 4846 generic.go:334] "Generic (PLEG): container finished" podID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerID="f37c4ce3d5f7069d6c0856e55913548e8db53ec2177d5b47f2d13ab79bd1031b" exitCode=0 Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.388147 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerDied","Data":"f37c4ce3d5f7069d6c0856e55913548e8db53ec2177d5b47f2d13ab79bd1031b"} Dec 01 00:21:34 crc kubenswrapper[4846]: I1201 00:21:34.705970 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-56b5c6c555-q9msd"] Dec 01 00:21:35 crc kubenswrapper[4846]: I1201 00:21:35.408134 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" event={"ID":"3d05afe4-43b5-4fba-a245-c913a20a5301","Type":"ContainerStarted","Data":"b3d6724fb056d6531161758a08e11eb4db4a0fe3b453a345ddeaeb2ad129802b"} Dec 01 00:21:35 crc kubenswrapper[4846]: I1201 00:21:35.430324 4846 generic.go:334] "Generic (PLEG): container finished" podID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerID="564e594f7f25f835fbaba109c3f08868603deab1697a119df9d33a613880c621" exitCode=0 Dec 01 00:21:35 crc kubenswrapper[4846]: I1201 00:21:35.430375 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerDied","Data":"564e594f7f25f835fbaba109c3f08868603deab1697a119df9d33a613880c621"} Dec 01 00:21:36 crc kubenswrapper[4846]: I1201 00:21:36.958634 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.018940 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util\") pod \"018280e5-7f09-4ed8-81b4-0c26013fa732\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.019053 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle\") pod \"018280e5-7f09-4ed8-81b4-0c26013fa732\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.019110 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2nvj\" (UniqueName: \"kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj\") pod \"018280e5-7f09-4ed8-81b4-0c26013fa732\" (UID: \"018280e5-7f09-4ed8-81b4-0c26013fa732\") " Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.021823 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle" (OuterVolumeSpecName: "bundle") pod "018280e5-7f09-4ed8-81b4-0c26013fa732" (UID: "018280e5-7f09-4ed8-81b4-0c26013fa732"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.036695 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util" (OuterVolumeSpecName: "util") pod "018280e5-7f09-4ed8-81b4-0c26013fa732" (UID: "018280e5-7f09-4ed8-81b4-0c26013fa732"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.040910 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj" (OuterVolumeSpecName: "kube-api-access-b2nvj") pod "018280e5-7f09-4ed8-81b4-0c26013fa732" (UID: "018280e5-7f09-4ed8-81b4-0c26013fa732"). InnerVolumeSpecName "kube-api-access-b2nvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.120297 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.120339 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/018280e5-7f09-4ed8-81b4-0c26013fa732-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.120353 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2nvj\" (UniqueName: \"kubernetes.io/projected/018280e5-7f09-4ed8-81b4-0c26013fa732-kube-api-access-b2nvj\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.469072 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" event={"ID":"018280e5-7f09-4ed8-81b4-0c26013fa732","Type":"ContainerDied","Data":"10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184"} Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.469114 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10c3cb3598ff8e921ee96c6e40ad63ae4b21abeadb1478933598d4d68594f184" Dec 01 00:21:37 crc kubenswrapper[4846]: I1201 00:21:37.469173 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7" Dec 01 00:21:39 crc kubenswrapper[4846]: I1201 00:21:39.166591 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:39 crc kubenswrapper[4846]: I1201 00:21:39.291046 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:42 crc kubenswrapper[4846]: I1201 00:21:42.534286 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:42 crc kubenswrapper[4846]: I1201 00:21:42.535004 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdk9q" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" containerID="cri-o://3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" gracePeriod=2 Dec 01 00:21:43 crc kubenswrapper[4846]: I1201 00:21:43.533825 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerDied","Data":"3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890"} Dec 01 00:21:43 crc kubenswrapper[4846]: I1201 00:21:43.533953 4846 generic.go:334] "Generic (PLEG): container finished" podID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerID="3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" exitCode=0 Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.064834 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890 is running failed: container process not found" 
containerID="3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.065727 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890 is running failed: container process not found" containerID="3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.066187 4846 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890 is running failed: container process not found" containerID="3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" cmd=["grpc_health_probe","-addr=:50051"] Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.066222 4846 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rdk9q" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.394306 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl"] Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.394712 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="pull" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.394733 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="pull" Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.394785 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="util" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.394795 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="util" Dec 01 00:21:49 crc kubenswrapper[4846]: E1201 00:21:49.394807 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="extract" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.394818 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="extract" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.394967 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="018280e5-7f09-4ed8-81b4-0c26013fa732" containerName="extract" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.395505 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.398524 4846 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-kzvrx" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.398672 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.398752 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.418032 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl"] Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.498812 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gc44\" (UniqueName: \"kubernetes.io/projected/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-kube-api-access-4gc44\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.498884 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.601431 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gc44\" (UniqueName: \"kubernetes.io/projected/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-kube-api-access-4gc44\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.601666 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.602178 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.624568 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gc44\" (UniqueName: \"kubernetes.io/projected/b50c3f0d-eea0-4633-8fe7-fe105e3e5c52-kube-api-access-4gc44\") pod \"cert-manager-operator-controller-manager-5446d6888b-f44bl\" (UID: \"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:49 crc kubenswrapper[4846]: I1201 00:21:49.722236 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" Dec 01 00:21:55 crc kubenswrapper[4846]: I1201 00:21:55.419415 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:21:55 crc kubenswrapper[4846]: I1201 00:21:55.419989 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:21:57 crc kubenswrapper[4846]: E1201 00:21:57.788235 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Dec 01 00:21:57 crc kubenswrapper[4846]: E1201 00:21:57.788753 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tktfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
interconnect-operator-5bb49f789d-5dzc7_service-telemetry(4654284a-ba35-403d-84ea-1e3f1dd8bc39): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:21:57 crc kubenswrapper[4846]: E1201 00:21:57.790871 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" podUID="4654284a-ba35-403d-84ea-1e3f1dd8bc39" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.481516 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.481818 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s_openshift-operators(8e07cf8a-8c81-462f-9e58-b29e9260dc71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.483656 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" podUID="8e07cf8a-8c81-462f-9e58-b29e9260dc71" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.519100 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.521670 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.521875 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq_openshift-operators(1b7c7b93-7478-44f8-ab2f-5891c64d630a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.523589 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" 
podUID="1b7c7b93-7478-44f8-ab2f-5891c64d630a" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.644552 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtr8\" (UniqueName: \"kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8\") pod \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.644713 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities\") pod \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.644826 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content\") pod \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\" (UID: \"9a0bd765-e41c-4343-bf55-ccd424ee75d3\") " Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.652747 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities" (OuterVolumeSpecName: "utilities") pod "9a0bd765-e41c-4343-bf55-ccd424ee75d3" (UID: "9a0bd765-e41c-4343-bf55-ccd424ee75d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.664059 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8" (OuterVolumeSpecName: "kube-api-access-xvtr8") pod "9a0bd765-e41c-4343-bf55-ccd424ee75d3" (UID: "9a0bd765-e41c-4343-bf55-ccd424ee75d3"). InnerVolumeSpecName "kube-api-access-xvtr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.699444 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdk9q" event={"ID":"9a0bd765-e41c-4343-bf55-ccd424ee75d3","Type":"ContainerDied","Data":"9e8562b00001e3f1bd3e429396aad09288655a813b8efe1889adcd9b99b72e86"} Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.699532 4846 scope.go:117] "RemoveContainer" containerID="3a82553830361662f7c1d435d4368698f54b7c1b03a5f0b11b9fe5963996e890" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.699727 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdk9q" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.705819 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" podUID="8e07cf8a-8c81-462f-9e58-b29e9260dc71" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.705936 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" podUID="4654284a-ba35-403d-84ea-1e3f1dd8bc39" Dec 01 00:21:58 crc kubenswrapper[4846]: E1201 00:21:58.706370 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" podUID="1b7c7b93-7478-44f8-ab2f-5891c64d630a" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.708708 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a0bd765-e41c-4343-bf55-ccd424ee75d3" (UID: "9a0bd765-e41c-4343-bf55-ccd424ee75d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.747037 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtr8\" (UniqueName: \"kubernetes.io/projected/9a0bd765-e41c-4343-bf55-ccd424ee75d3-kube-api-access-xvtr8\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.747078 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:58 crc kubenswrapper[4846]: I1201 00:21:58.747091 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0bd765-e41c-4343-bf55-ccd424ee75d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:21:59 crc kubenswrapper[4846]: I1201 00:21:59.038561 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:59 crc kubenswrapper[4846]: I1201 00:21:59.043792 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdk9q"] Dec 01 00:21:59 crc kubenswrapper[4846]: I1201 00:21:59.608048 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" path="/var/lib/kubelet/pods/9a0bd765-e41c-4343-bf55-ccd424ee75d3/volumes" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.218058 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.219318 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91
596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klt4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-d2dpt_openshift-operators(4d309dd9-6adf-466f-a167-b1c57b2089d4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.220726 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" podUID="4d309dd9-6adf-466f-a167-b1c57b2089d4" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.734773 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" podUID="4d309dd9-6adf-466f-a167-b1c57b2089d4" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.943738 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Dec 01 00:22:01 crc 
kubenswrapper[4846]: E1201 00:22:01.944085 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wt54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-vq7fk_openshift-operators(8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:22:01 crc kubenswrapper[4846]: E1201 00:22:01.945369 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" podUID="8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9" Dec 01 00:22:02 crc kubenswrapper[4846]: I1201 00:22:02.706182 4846 scope.go:117] "RemoveContainer" containerID="e0639ef932293b7bc94ae355b088e4bd85b65ff5bb46929090e3b8b3635fb0a2" Dec 01 00:22:02 crc kubenswrapper[4846]: E1201 00:22:02.711190 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Dec 01 00:22:02 crc 
kubenswrapper[4846]: E1201 00:22:02.711412 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d56bf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-dv92l_openshift-operators(0c431140-8c85-47ad-b896-921fad1ac609): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:22:02 crc kubenswrapper[4846]: E1201 00:22:02.712667 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-dv92l" podUID="0c431140-8c85-47ad-b896-921fad1ac609" Dec 01 00:22:02 crc kubenswrapper[4846]: E1201 00:22:02.750244 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" podUID="8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9" Dec 01 00:22:02 crc kubenswrapper[4846]: E1201 00:22:02.751131 4846 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-dv92l" podUID="0c431140-8c85-47ad-b896-921fad1ac609" Dec 01 00:22:02 crc kubenswrapper[4846]: I1201 00:22:02.802240 4846 scope.go:117] "RemoveContainer" containerID="9ef489707006f60146c1c170e07cd1bb906dfe320628d41e020b0d8e43fdf314" Dec 01 00:22:02 crc kubenswrapper[4846]: I1201 00:22:02.966863 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl"] Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.756368 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" event={"ID":"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52","Type":"ContainerStarted","Data":"cd4a420134cf99f402b4b6c399e067bc30af4bfbddad0267baafae122819e82e"} Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.758068 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" event={"ID":"3d05afe4-43b5-4fba-a245-c913a20a5301","Type":"ContainerStarted","Data":"12d21435413dcda47134b647fd39307a3201161c2567d65ab783d2798374abe7"} Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.803273 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-56b5c6c555-q9msd" podStartSLOduration=1.8552216879999999 podStartE2EDuration="29.803248008s" podCreationTimestamp="2025-12-01 00:21:34 +0000 UTC" firstStartedPulling="2025-12-01 00:21:34.73651276 +0000 UTC m=+915.517281834" lastFinishedPulling="2025-12-01 00:22:02.68453908 +0000 UTC m=+943.465308154" observedRunningTime="2025-12-01 00:22:03.799887073 +0000 UTC m=+944.580656157" watchObservedRunningTime="2025-12-01 00:22:03.803248008 +0000 UTC m=+944.584017082" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.977019 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 01 00:22:03 crc kubenswrapper[4846]: E1201 00:22:03.977236 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.977249 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" Dec 01 00:22:03 crc kubenswrapper[4846]: E1201 00:22:03.977261 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="extract-content" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.977268 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="extract-content" Dec 01 00:22:03 crc kubenswrapper[4846]: E1201 00:22:03.977282 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="extract-utilities" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.977291 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="extract-utilities" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.977386 4846 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="9a0bd765-e41c-4343-bf55-ccd424ee75d3" containerName="registry-server" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.978203 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.980081 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.980533 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-cg2lv" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.980889 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.981086 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.981133 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.981086 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.981540 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.983098 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 01 00:22:03 crc kubenswrapper[4846]: I1201 00:22:03.987783 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.003206 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027395 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027468 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027650 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027773 4846 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027802 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027829 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027874 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027906 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.027999 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028081 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028195 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028322 4846 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028362 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028414 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/727053d3-2d7a-4a4b-8a99-3762f38c8344-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.028465 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136625 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136719 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136755 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136788 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136813 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136834 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136854 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136876 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136920 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136951 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.136975 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.137006 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.137033 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.137057 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/727053d3-2d7a-4a4b-8a99-3762f38c8344-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.137081 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.137470 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.138449 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.139969 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.140148 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.140465 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.141974 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 
00:22:04.145820 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.145988 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.149249 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.149718 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.149985 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.150276 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.150521 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.151232 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/727053d3-2d7a-4a4b-8a99-3762f38c8344-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.170573 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/727053d3-2d7a-4a4b-8a99-3762f38c8344-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"727053d3-2d7a-4a4b-8a99-3762f38c8344\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.300409 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.539974 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 01 00:22:04 crc kubenswrapper[4846]: W1201 00:22:04.551014 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727053d3_2d7a_4a4b_8a99_3762f38c8344.slice/crio-1cd1d0bafc2b735d28cbd3b04a8f42af415d60b9350c85478cc3890537592447 WatchSource:0}: Error finding container 1cd1d0bafc2b735d28cbd3b04a8f42af415d60b9350c85478cc3890537592447: Status 404 returned error can't find the container with id 1cd1d0bafc2b735d28cbd3b04a8f42af415d60b9350c85478cc3890537592447 Dec 01 00:22:04 crc kubenswrapper[4846]: I1201 00:22:04.773249 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"727053d3-2d7a-4a4b-8a99-3762f38c8344","Type":"ContainerStarted","Data":"1cd1d0bafc2b735d28cbd3b04a8f42af415d60b9350c85478cc3890537592447"} Dec 01 00:22:06 crc kubenswrapper[4846]: I1201 00:22:06.794115 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" event={"ID":"b50c3f0d-eea0-4633-8fe7-fe105e3e5c52","Type":"ContainerStarted","Data":"baca176b83750e4aec5bd55f0c7362e42c89f6db7a2ab12756d56acd1c913eec"} Dec 01 00:22:06 crc kubenswrapper[4846]: I1201 00:22:06.824677 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-f44bl" podStartSLOduration=15.062200756 podStartE2EDuration="17.824649157s" podCreationTimestamp="2025-12-01 00:21:49 +0000 UTC" firstStartedPulling="2025-12-01 00:22:02.974769801 +0000 UTC m=+943.755538875" lastFinishedPulling="2025-12-01 00:22:05.737218202 +0000 UTC m=+946.517987276" observedRunningTime="2025-12-01 00:22:06.820960341 +0000 UTC m=+947.601729435" watchObservedRunningTime="2025-12-01 00:22:06.824649157 +0000 UTC m=+947.605418221" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.339977 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jkskj"] Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.342111 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.346639 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.348762 4846 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-c4k85" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.349121 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.356371 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.356418 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqc9q\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-kube-api-access-cqc9q\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.458126 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.458197 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqc9q\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-kube-api-access-cqc9q\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.465160 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jkskj"] Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.501079 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqc9q\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-kube-api-access-cqc9q\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.515798 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jkskj\" (UID: \"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.726714 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.846196 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" event={"ID":"8e07cf8a-8c81-462f-9e58-b29e9260dc71","Type":"ContainerStarted","Data":"28624793aa1e95d9235de00529239da66eea2fd0b901340cf055ce31f2ba1f96"} Dec 01 00:22:11 crc kubenswrapper[4846]: I1201 00:22:11.962758 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s" podStartSLOduration=3.646376425 podStartE2EDuration="40.962730913s" podCreationTimestamp="2025-12-01 00:21:31 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.519812724 +0000 UTC m=+914.300581798" lastFinishedPulling="2025-12-01 00:22:10.836167212 +0000 UTC m=+951.616936286" observedRunningTime="2025-12-01 00:22:11.957927544 +0000 UTC m=+952.738696618" watchObservedRunningTime="2025-12-01 00:22:11.962730913 +0000 UTC m=+952.743499997" Dec 01 00:22:12 crc kubenswrapper[4846]: I1201 00:22:12.864493 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jkskj"] Dec 01 00:22:12 crc kubenswrapper[4846]: I1201 00:22:12.867361 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" event={"ID":"4654284a-ba35-403d-84ea-1e3f1dd8bc39","Type":"ContainerStarted","Data":"e687f7aabfb73b84c78e8d7c46d160e7ac1d05141bd32c8f0c6a5a182944313d"} Dec 01 00:22:12 crc kubenswrapper[4846]: I1201 00:22:12.893855 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-5dzc7" podStartSLOduration=4.5024932 podStartE2EDuration="42.893833694s" podCreationTimestamp="2025-12-01 00:21:30 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.351554069 +0000 UTC m=+914.132323143" lastFinishedPulling="2025-12-01 00:22:11.742894573 +0000 UTC m=+952.523663637" observedRunningTime="2025-12-01 00:22:12.890394537 +0000 UTC m=+953.671163611" watchObservedRunningTime="2025-12-01 00:22:12.893833694 +0000 UTC m=+953.674602768" Dec 01 00:22:12 crc kubenswrapper[4846]: W1201 00:22:12.924261 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a8cb7d0_8968_4cfe_a4f6_89d488b15dc4.slice/crio-3608b6b8f5f39a56c253a4e10be6de1fbd0d90183bba731a330790b140a10981 WatchSource:0}: Error finding container 3608b6b8f5f39a56c253a4e10be6de1fbd0d90183bba731a330790b140a10981: Status 404 returned error can't find the container with id 3608b6b8f5f39a56c253a4e10be6de1fbd0d90183bba731a330790b140a10981 Dec 01 00:22:13 crc kubenswrapper[4846]: I1201 00:22:13.877445 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" event={"ID":"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4","Type":"ContainerStarted","Data":"3608b6b8f5f39a56c253a4e10be6de1fbd0d90183bba731a330790b140a10981"} Dec 01 00:22:13 crc kubenswrapper[4846]: I1201 00:22:13.879399 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" event={"ID":"1b7c7b93-7478-44f8-ab2f-5891c64d630a","Type":"ContainerStarted","Data":"d89ad86f582d0a49549a4f8480ebd5632e05e48abaac001e9a6de49e1a68c677"} Dec 01 00:22:14 crc kubenswrapper[4846]: I1201 00:22:14.512390 4846 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq" podStartSLOduration=-9223371993.342403 podStartE2EDuration="43.512371594s" podCreationTimestamp="2025-12-01 00:21:31 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.20310453 +0000 UTC m=+913.983873604" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:22:14.509812573 +0000 UTC m=+955.290581667" watchObservedRunningTime="2025-12-01 00:22:14.512371594 +0000 UTC m=+955.293140668" Dec 01 00:22:14 crc kubenswrapper[4846]: I1201 00:22:14.920251 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4plm6"] Dec 01 00:22:14 crc kubenswrapper[4846]: I1201 00:22:14.921211 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:14 crc kubenswrapper[4846]: I1201 00:22:14.923924 4846 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hnhv7" Dec 01 00:22:14 crc kubenswrapper[4846]: I1201 00:22:14.939759 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4plm6"] Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.074646 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmfmk\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-kube-api-access-wmfmk\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.074807 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.176601 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.176714 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmfmk\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-kube-api-access-wmfmk\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.200753 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.201377 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wmfmk\" (UniqueName: \"kubernetes.io/projected/0c7bbe0f-b67d-4752-9d38-76d426929bfa-kube-api-access-wmfmk\") pod \"cert-manager-webhook-f4fb5df64-4plm6\" (UID: \"0c7bbe0f-b67d-4752-9d38-76d426929bfa\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.244950 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:15 crc kubenswrapper[4846]: I1201 00:22:15.869816 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-4plm6"] Dec 01 00:22:15 crc kubenswrapper[4846]: W1201 00:22:15.891363 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c7bbe0f_b67d_4752_9d38_76d426929bfa.slice/crio-d225c0ab4784e2b61148b253097f33939a62f70dc770d98162dd8e9491624f98 WatchSource:0}: Error finding container d225c0ab4784e2b61148b253097f33939a62f70dc770d98162dd8e9491624f98: Status 404 returned error can't find the container with id d225c0ab4784e2b61148b253097f33939a62f70dc770d98162dd8e9491624f98 Dec 01 00:22:16 crc kubenswrapper[4846]: I1201 00:22:16.936510 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" event={"ID":"0c7bbe0f-b67d-4752-9d38-76d426929bfa","Type":"ContainerStarted","Data":"d225c0ab4784e2b61148b253097f33939a62f70dc770d98162dd8e9491624f98"} Dec 01 00:22:18 crc kubenswrapper[4846]: I1201 00:22:18.033497 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" event={"ID":"4d309dd9-6adf-466f-a167-b1c57b2089d4","Type":"ContainerStarted","Data":"d1e2abca6e4da3e9fed06b1e44c2e58d1564af3aa70484ded20e421c847904b6"} Dec 01 00:22:18 crc kubenswrapper[4846]: I1201 00:22:18.035632 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:22:18 crc kubenswrapper[4846]: I1201 00:22:18.058320 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" Dec 01 00:22:18 crc kubenswrapper[4846]: I1201 00:22:18.108162 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-d2dpt" podStartSLOduration=2.736283023 podStartE2EDuration="46.108142802s" podCreationTimestamp="2025-12-01 00:21:32 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.519535476 +0000 UTC m=+914.300304560" lastFinishedPulling="2025-12-01 00:22:16.891395265 +0000 UTC m=+957.672164339" observedRunningTime="2025-12-01 00:22:18.107102551 +0000 UTC m=+958.887871625" watchObservedRunningTime="2025-12-01 00:22:18.108142802 +0000 UTC m=+958.888911876" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.079199 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.080915 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.088055 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.088185 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.088330 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.094380 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.119181 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265467 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265576 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px862\" (UniqueName: \"kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265627 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265702 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265737 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265762 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: 
\"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265778 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265793 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265812 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.265836 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.266011 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.266038 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367165 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367229 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367270 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367299 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367322 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367348 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367371 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367397 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367432 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367463 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367493 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.367532 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px862\" (UniqueName: \"kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.368000 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.369232 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.369755 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.369887 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.370099 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.370169 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.370371 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" 
Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.370654 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.370988 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.376695 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.394907 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.412621 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px862\" (UniqueName: \"kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862\") pod \"service-telemetry-operator-1-build\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:22 crc kubenswrapper[4846]: I1201 00:22:22.708199 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:25 crc kubenswrapper[4846]: I1201 00:22:25.420094 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:22:25 crc kubenswrapper[4846]: I1201 00:22:25.420471 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.446471 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ztlvf"] Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.447564 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.452720 4846 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rrhnw" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.459130 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ztlvf"] Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.546992 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5j4v\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-kube-api-access-z5j4v\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.547128 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-bound-sa-token\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.648890 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-bound-sa-token\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.648984 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5j4v\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-kube-api-access-z5j4v\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.675754 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5j4v\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-kube-api-access-z5j4v\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.688554 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd8f297-7a23-41d8-b71f-eb10e7f7ead3-bound-sa-token\") pod \"cert-manager-86cb77c54b-ztlvf\" (UID: \"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3\") " pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:28 crc kubenswrapper[4846]: I1201 00:22:28.772154 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ztlvf" Dec 01 00:22:31 crc kubenswrapper[4846]: I1201 00:22:31.848047 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.473808 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.475245 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.483983 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.508155 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.508155 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.508232 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521144 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521184 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521200 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521220 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521287 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521314 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521338 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521366 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nh4t\" (UniqueName: \"kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521402 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521432 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521453 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.521492 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.622834 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623087 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623222 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623329 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623370 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623441 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623473 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623500 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623547 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nh4t\" (UniqueName: \"kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623607 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623658 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623711 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623783 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.623898 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.624652 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.624799 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.624929 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.625137 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.625383 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.625494 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.625729 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.631146 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.631581 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.647641 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nh4t\" (UniqueName: \"kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t\") pod \"service-telemetry-operator-2-build\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:33 crc kubenswrapper[4846]: I1201 00:22:33.832371 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.113742 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.113945 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqc9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-855d9ccff4-jkskj_cert-manager(6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.115229 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" podUID="6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.271779 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.272044 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmfmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-4plm6_cert-manager(0c7bbe0f-b67d-4752-9d38-76d426929bfa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.273215 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" podUID="0c7bbe0f-b67d-4752-9d38-76d426929bfa" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.335041 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" podUID="0c7bbe0f-b67d-4752-9d38-76d426929bfa" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.335042 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" podUID="6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.429577 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.430230 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(727053d3-2d7a-4a4b-8a99-3762f38c8344): ErrImagePull: rpc error: code = Canceled desc = copying config: context 
canceled" logger="UnhandledError" Dec 01 00:22:37 crc kubenswrapper[4846]: E1201 00:22:37.431518 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" Dec 01 00:22:37 crc kubenswrapper[4846]: I1201 00:22:37.761461 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:37 crc kubenswrapper[4846]: W1201 00:22:37.877267 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57dcb580_ffb2_4ab6_a4b6_7759a5c1e4cd.slice/crio-b74f7a8da66523372aef5aec5928ecb4c8e592038d2ac11fda09dce39eb7bf71 WatchSource:0}: Error finding container b74f7a8da66523372aef5aec5928ecb4c8e592038d2ac11fda09dce39eb7bf71: Status 404 returned error can't find the container with id b74f7a8da66523372aef5aec5928ecb4c8e592038d2ac11fda09dce39eb7bf71 Dec 01 00:22:37 crc kubenswrapper[4846]: I1201 00:22:37.926312 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.265742 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ztlvf"] Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.338900 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerStarted","Data":"dff79e5b7d696146370c22b1bc039b1d479767853ccc7feddf1f90b9b3aa7238"} Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.341071 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd","Type":"ContainerStarted","Data":"b74f7a8da66523372aef5aec5928ecb4c8e592038d2ac11fda09dce39eb7bf71"} Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.342304 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" event={"ID":"8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9","Type":"ContainerStarted","Data":"b641166419797eb88b435cf07a515b399811699569db2ab9699d1f1d8de22c8c"} Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.344318 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dv92l" event={"ID":"0c431140-8c85-47ad-b896-921fad1ac609","Type":"ContainerStarted","Data":"711673a786266bdebc478dac6ce133a5c72ecdd2036d265160334b30b1fc3044"} Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.344494 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.345473 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-ztlvf" event={"ID":"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3","Type":"ContainerStarted","Data":"3159c68331ccf9e041cf82e216bf26695b20a0143fc6b5b79dd813f308df4abe"} Dec 01 00:22:38 crc kubenswrapper[4846]: E1201 00:22:38.346210 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.362261 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-vq7fk" podStartSLOduration=3.4005492569999998 podStartE2EDuration="1m7.362238831s" podCreationTimestamp="2025-12-01 00:21:31 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.373862303 +0000 UTC m=+914.154631377" lastFinishedPulling="2025-12-01 00:22:37.335551877 +0000 UTC m=+978.116320951" observedRunningTime="2025-12-01 00:22:38.359975881 +0000 UTC m=+979.140744975" watchObservedRunningTime="2025-12-01 00:22:38.362238831 +0000 UTC m=+979.143007915" Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.378515 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-dv92l" podStartSLOduration=2.561605939 podStartE2EDuration="1m6.378498167s" podCreationTimestamp="2025-12-01 00:21:32 +0000 UTC" firstStartedPulling="2025-12-01 00:21:33.519156154 +0000 UTC m=+914.299925228" lastFinishedPulling="2025-12-01 00:22:37.336048382 +0000 UTC m=+978.116817456" observedRunningTime="2025-12-01 00:22:38.375929158 +0000 UTC m=+979.156698232" watchObservedRunningTime="2025-12-01 00:22:38.378498167 +0000 UTC m=+979.159267231" Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.552890 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 01 00:22:38 crc kubenswrapper[4846]: I1201 00:22:38.592594 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 01 00:22:39 crc kubenswrapper[4846]: E1201 00:22:39.359463 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" Dec 01 00:22:40 crc kubenswrapper[4846]: E1201 00:22:40.364015 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.112971 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.116796 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.135821 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.311572 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lld9h\" (UniqueName: \"kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.311843 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.312107 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.371766 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-ztlvf" event={"ID":"bcd8f297-7a23-41d8-b71f-eb10e7f7ead3","Type":"ContainerStarted","Data":"6855c5a5687937226e454ec23d5fe41f0c89bb1e77c6cbfca66d85beb44be082"} Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.413571 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.413650 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.413736 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lld9h\" (UniqueName: \"kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.414180 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.414582 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.453588 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lld9h\" (UniqueName: \"kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h\") pod \"community-operators-p4rtr\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:41 crc kubenswrapper[4846]: I1201 00:22:41.750805 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:42 crc kubenswrapper[4846]: I1201 00:22:42.603567 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-dv92l" Dec 01 00:22:42 crc kubenswrapper[4846]: I1201 00:22:42.630305 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-ztlvf" podStartSLOduration=13.255639257 podStartE2EDuration="14.630286008s" podCreationTimestamp="2025-12-01 00:22:28 +0000 UTC" firstStartedPulling="2025-12-01 00:22:38.276017628 +0000 UTC m=+979.056786702" lastFinishedPulling="2025-12-01 00:22:39.650664379 +0000 UTC m=+980.431433453" observedRunningTime="2025-12-01 00:22:41.398331817 +0000 UTC m=+982.179100901" watchObservedRunningTime="2025-12-01 00:22:42.630286008 +0000 UTC m=+983.411055082" Dec 01 00:22:45 crc kubenswrapper[4846]: I1201 00:22:45.275844 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:45 crc kubenswrapper[4846]: I1201 00:22:45.396570 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerStarted","Data":"84f8c03db75635ad2f7957991e0f51c09d7c688f353d5b6b863442484830bcc7"} Dec 01 00:22:46 crc kubenswrapper[4846]: I1201 00:22:46.402726 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerStarted","Data":"65c23beaa325f01b3e3e988b8cc5a322ca34b4587ac9a9eb1a7c0f07bd5d12c3"} Dec 01 00:22:46 crc kubenswrapper[4846]: I1201 00:22:46.404644 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd","Type":"ContainerStarted","Data":"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c"} Dec 01 00:22:46 crc kubenswrapper[4846]: I1201 00:22:46.404772 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" containerName="manage-dockerfile" containerID="cri-o://593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c" gracePeriod=30 Dec 01 00:22:46 crc kubenswrapper[4846]: I1201 00:22:46.407599 4846 generic.go:334] "Generic (PLEG): container finished" podID="1912a951-9d01-457f-8f22-e1f954f8a699" containerID="54047ccbdd364d2e9fc32108f6563eb7aaa382791ca5870bd8bf6b4295a49a2c" exitCode=0 Dec 01 00:22:46 crc kubenswrapper[4846]: I1201 00:22:46.407651 4846 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerDied","Data":"54047ccbdd364d2e9fc32108f6563eb7aaa382791ca5870bd8bf6b4295a49a2c"} Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.151440 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd/manage-dockerfile/0.log" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.151966 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.311733 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312103 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312223 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312254 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px862\" (UniqueName: \"kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312316 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312354 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312369 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312422 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312507 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312561 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312611 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312633 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312616 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312650 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312742 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312821 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run\") pod \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\" (UID: \"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd\") " Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.312912 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313165 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313167 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313224 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313196 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313203 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313237 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313297 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313328 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.313346 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.318140 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.321472 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.336552 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862" (OuterVolumeSpecName: "kube-api-access-px862") pod "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" (UID: "57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd"). InnerVolumeSpecName "kube-api-access-px862". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.413977 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414013 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414026 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414038 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414052 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px862\" (UniqueName: \"kubernetes.io/projected/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-kube-api-access-px862\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414067 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.414080 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416162 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd/manage-dockerfile/0.log" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416218 4846 generic.go:334] "Generic (PLEG): container finished" podID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" containerID="593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c" exitCode=1 Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416285 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416287 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd","Type":"ContainerDied","Data":"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c"} Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416439 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd","Type":"ContainerDied","Data":"b74f7a8da66523372aef5aec5928ecb4c8e592038d2ac11fda09dce39eb7bf71"} Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.416488 4846 scope.go:117] "RemoveContainer" containerID="593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.419289 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerStarted","Data":"9444d0115bc4c6633edbb44d9f51fdd24552584864198bd10a86087b46a02930"} Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.433770 4846 scope.go:117] "RemoveContainer" containerID="593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c" Dec 01 00:22:47 crc kubenswrapper[4846]: E1201 00:22:47.434221 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c\": container with ID starting with 593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c not found: ID does not exist" containerID="593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.434271 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c"} err="failed to get container status \"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c\": rpc error: code = NotFound desc = could not find container \"593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c\": container with ID starting with 593fe684a517de7d0144a5616909dee152defd2dba84bf8968db410acf92ea2c not found: ID does not exist" Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.466309 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.480146 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 01 00:22:47 crc kubenswrapper[4846]: I1201 00:22:47.591098 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" path="/var/lib/kubelet/pods/57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd/volumes" Dec 01 00:22:48 crc kubenswrapper[4846]: I1201 00:22:48.427926 4846 generic.go:334] "Generic (PLEG): container finished" podID="1912a951-9d01-457f-8f22-e1f954f8a699" containerID="9444d0115bc4c6633edbb44d9f51fdd24552584864198bd10a86087b46a02930" exitCode=0 Dec 01 00:22:48 crc kubenswrapper[4846]: I1201 00:22:48.427962 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" 
event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerDied","Data":"9444d0115bc4c6633edbb44d9f51fdd24552584864198bd10a86087b46a02930"} Dec 01 00:22:49 crc kubenswrapper[4846]: I1201 00:22:49.437358 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerStarted","Data":"69a1c682c7c0543edd7b73c343e8c172b61e06ced621680f2ffa0a63b50ddee1"} Dec 01 00:22:49 crc kubenswrapper[4846]: I1201 00:22:49.461046 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p4rtr" podStartSLOduration=6.049446425 podStartE2EDuration="8.46102191s" podCreationTimestamp="2025-12-01 00:22:41 +0000 UTC" firstStartedPulling="2025-12-01 00:22:46.409456553 +0000 UTC m=+987.190225627" lastFinishedPulling="2025-12-01 00:22:48.821032038 +0000 UTC m=+989.601801112" observedRunningTime="2025-12-01 00:22:49.45554267 +0000 UTC m=+990.236311754" watchObservedRunningTime="2025-12-01 00:22:49.46102191 +0000 UTC m=+990.241790994" Dec 01 00:22:50 crc kubenswrapper[4846]: I1201 00:22:50.445340 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" event={"ID":"6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4","Type":"ContainerStarted","Data":"90f2e056cf66d25ffecf44edb21c9e841f7eb00c5c02c97d44e58fd87e357b03"} Dec 01 00:22:50 crc kubenswrapper[4846]: I1201 00:22:50.465001 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jkskj" podStartSLOduration=-9223371997.389797 podStartE2EDuration="39.464978237s" podCreationTimestamp="2025-12-01 00:22:11 +0000 UTC" firstStartedPulling="2025-12-01 00:22:12.931140764 +0000 UTC m=+953.711909838" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:22:50.463652816 +0000 UTC m=+991.244421890" watchObservedRunningTime="2025-12-01 00:22:50.464978237 +0000 UTC m=+991.245747311" Dec 01 00:22:51 crc kubenswrapper[4846]: I1201 00:22:51.753731 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:51 crc kubenswrapper[4846]: I1201 00:22:51.756412 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:51 crc kubenswrapper[4846]: I1201 00:22:51.829398 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:52 crc kubenswrapper[4846]: I1201 00:22:52.463266 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" event={"ID":"0c7bbe0f-b67d-4752-9d38-76d426929bfa","Type":"ContainerStarted","Data":"a3b38839214be339a0ee70c7199006c252e58f92104a4e313fd440a70f46afa5"} Dec 01 00:22:52 crc kubenswrapper[4846]: I1201 00:22:52.484722 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" podStartSLOduration=-9223371998.370106 podStartE2EDuration="38.484669229s" podCreationTimestamp="2025-12-01 00:22:14 +0000 UTC" firstStartedPulling="2025-12-01 00:22:15.90294108 +0000 UTC m=+956.683710154" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:22:52.481076646 +0000 UTC m=+993.261845730" watchObservedRunningTime="2025-12-01 00:22:52.484669229 +0000 UTC 
m=+993.265438303" Dec 01 00:22:53 crc kubenswrapper[4846]: I1201 00:22:53.508213 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:53 crc kubenswrapper[4846]: I1201 00:22:53.564774 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:54 crc kubenswrapper[4846]: I1201 00:22:54.476345 4846 generic.go:334] "Generic (PLEG): container finished" podID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerID="65c23beaa325f01b3e3e988b8cc5a322ca34b4587ac9a9eb1a7c0f07bd5d12c3" exitCode=0 Dec 01 00:22:54 crc kubenswrapper[4846]: I1201 00:22:54.476436 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerDied","Data":"65c23beaa325f01b3e3e988b8cc5a322ca34b4587ac9a9eb1a7c0f07bd5d12c3"} Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.246253 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.419517 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.419586 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.419636 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.420239 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.420314 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e" gracePeriod=600 Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.484663 4846 generic.go:334] "Generic (PLEG): container finished" podID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerID="785734d8679df546b55e36e27048ebca43145aa28e6bb08dc00af28acd697ecc" exitCode=0 Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.484758 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerDied","Data":"785734d8679df546b55e36e27048ebca43145aa28e6bb08dc00af28acd697ecc"} Dec 01 00:22:55 crc 
kubenswrapper[4846]: I1201 00:22:55.485382 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p4rtr" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="registry-server" containerID="cri-o://69a1c682c7c0543edd7b73c343e8c172b61e06ced621680f2ffa0a63b50ddee1" gracePeriod=2 Dec 01 00:22:55 crc kubenswrapper[4846]: I1201 00:22:55.542663 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47/manage-dockerfile/0.log" Dec 01 00:22:56 crc kubenswrapper[4846]: I1201 00:22:56.498311 4846 generic.go:334] "Generic (PLEG): container finished" podID="1912a951-9d01-457f-8f22-e1f954f8a699" containerID="69a1c682c7c0543edd7b73c343e8c172b61e06ced621680f2ffa0a63b50ddee1" exitCode=0 Dec 01 00:22:56 crc kubenswrapper[4846]: I1201 00:22:56.498411 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerDied","Data":"69a1c682c7c0543edd7b73c343e8c172b61e06ced621680f2ffa0a63b50ddee1"} Dec 01 00:22:56 crc kubenswrapper[4846]: I1201 00:22:56.502299 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e" exitCode=0 Dec 01 00:22:56 crc kubenswrapper[4846]: I1201 00:22:56.502385 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e"} Dec 01 00:22:56 crc kubenswrapper[4846]: I1201 00:22:56.502659 4846 scope.go:117] "RemoveContainer" containerID="102b53cf93c8c6c4ec883c1482afd13bc556f199bcc1fe562a732d196e301581" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.420641 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.472476 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lld9h\" (UniqueName: \"kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h\") pod \"1912a951-9d01-457f-8f22-e1f954f8a699\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.472561 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities\") pod \"1912a951-9d01-457f-8f22-e1f954f8a699\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.472610 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content\") pod \"1912a951-9d01-457f-8f22-e1f954f8a699\" (UID: \"1912a951-9d01-457f-8f22-e1f954f8a699\") " Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.474540 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities" (OuterVolumeSpecName: "utilities") pod "1912a951-9d01-457f-8f22-e1f954f8a699" (UID: "1912a951-9d01-457f-8f22-e1f954f8a699"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.516959 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h" (OuterVolumeSpecName: "kube-api-access-lld9h") pod "1912a951-9d01-457f-8f22-e1f954f8a699" (UID: "1912a951-9d01-457f-8f22-e1f954f8a699"). InnerVolumeSpecName "kube-api-access-lld9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.567085 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerStarted","Data":"c7f1ee47fe847bcd97c39d90f55190a5c34e10dfc8ee3e92fec252ed050c3973"} Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.573885 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lld9h\" (UniqueName: \"kubernetes.io/projected/1912a951-9d01-457f-8f22-e1f954f8a699-kube-api-access-lld9h\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.573926 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.630994 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=16.843161111 podStartE2EDuration="24.630957491s" podCreationTimestamp="2025-12-01 00:22:33 +0000 UTC" firstStartedPulling="2025-12-01 00:22:37.943674768 +0000 UTC m=+978.724443832" lastFinishedPulling="2025-12-01 00:22:45.731471138 +0000 UTC m=+986.512240212" observedRunningTime="2025-12-01 00:22:57.622741106 +0000 UTC m=+998.403510180" watchObservedRunningTime="2025-12-01 00:22:57.630957491 +0000 UTC m=+998.411726565" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.637397 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1912a951-9d01-457f-8f22-e1f954f8a699" (UID: "1912a951-9d01-457f-8f22-e1f954f8a699"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.644248 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4rtr" event={"ID":"1912a951-9d01-457f-8f22-e1f954f8a699","Type":"ContainerDied","Data":"84f8c03db75635ad2f7957991e0f51c09d7c688f353d5b6b863442484830bcc7"} Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.644318 4846 scope.go:117] "RemoveContainer" containerID="69a1c682c7c0543edd7b73c343e8c172b61e06ced621680f2ffa0a63b50ddee1" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.644463 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p4rtr" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.666133 4846 scope.go:117] "RemoveContainer" containerID="9444d0115bc4c6633edbb44d9f51fdd24552584864198bd10a86087b46a02930" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.677658 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1912a951-9d01-457f-8f22-e1f954f8a699-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.702012 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:57 crc kubenswrapper[4846]: I1201 00:22:57.706894 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p4rtr"] Dec 01 00:22:59 crc kubenswrapper[4846]: I1201 00:22:59.590390 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" path="/var/lib/kubelet/pods/1912a951-9d01-457f-8f22-e1f954f8a699/volumes" Dec 01 00:23:00 crc kubenswrapper[4846]: I1201 00:23:00.248737 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-4plm6" Dec 01 00:23:00 crc kubenswrapper[4846]: I1201 00:23:00.539328 4846 scope.go:117] "RemoveContainer" containerID="54047ccbdd364d2e9fc32108f6563eb7aaa382791ca5870bd8bf6b4295a49a2c" Dec 01 00:23:00 crc kubenswrapper[4846]: I1201 00:23:00.670930 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e"} Dec 01 00:23:02 crc kubenswrapper[4846]: I1201 00:23:02.686734 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"727053d3-2d7a-4a4b-8a99-3762f38c8344","Type":"ContainerStarted","Data":"ce78cf37db5b0601fca686ac2c9228086de7eef4ebcb784f7a9b47ad8150302a"} Dec 01 00:23:04 crc kubenswrapper[4846]: I1201 00:23:04.727767 4846 generic.go:334] "Generic (PLEG): container finished" podID="727053d3-2d7a-4a4b-8a99-3762f38c8344" containerID="ce78cf37db5b0601fca686ac2c9228086de7eef4ebcb784f7a9b47ad8150302a" exitCode=0 Dec 01 00:23:04 crc kubenswrapper[4846]: I1201 00:23:04.727809 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"727053d3-2d7a-4a4b-8a99-3762f38c8344","Type":"ContainerDied","Data":"ce78cf37db5b0601fca686ac2c9228086de7eef4ebcb784f7a9b47ad8150302a"} Dec 01 00:23:05 crc kubenswrapper[4846]: I1201 00:23:05.735183 4846 generic.go:334] "Generic (PLEG): container finished" podID="727053d3-2d7a-4a4b-8a99-3762f38c8344" containerID="9b6d9ff2781362c7ba4abde4bee127220953426fdbcc0d717d6ebc7b32135329" exitCode=0 Dec 01 00:23:05 crc kubenswrapper[4846]: I1201 00:23:05.735281 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"727053d3-2d7a-4a4b-8a99-3762f38c8344","Type":"ContainerDied","Data":"9b6d9ff2781362c7ba4abde4bee127220953426fdbcc0d717d6ebc7b32135329"} Dec 01 00:23:06 crc kubenswrapper[4846]: I1201 00:23:06.780069 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"727053d3-2d7a-4a4b-8a99-3762f38c8344","Type":"ContainerStarted","Data":"c9698d2812a49967064e3f54b278d8992837b8be4a064efd0667c4500457457b"} Dec 01 00:23:06 crc kubenswrapper[4846]: I1201 00:23:06.780966 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:23:20 crc kubenswrapper[4846]: I1201 00:23:20.005009 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" containerName="elasticsearch" probeResult="failure" output=< Dec 01 00:23:20 crc kubenswrapper[4846]: {"timestamp": "2025-12-01T00:23:19+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 01 00:23:20 crc kubenswrapper[4846]: > Dec 01 00:23:24 crc kubenswrapper[4846]: I1201 00:23:24.465509 4846 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="727053d3-2d7a-4a4b-8a99-3762f38c8344" containerName="elasticsearch" probeResult="failure" output=< Dec 01 00:23:24 crc kubenswrapper[4846]: {"timestamp": "2025-12-01T00:23:24+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 01 00:23:24 crc kubenswrapper[4846]: > Dec 01 00:23:30 crc kubenswrapper[4846]: I1201 00:23:30.098147 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 01 00:23:30 crc kubenswrapper[4846]: I1201 00:23:30.135664 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=34.240327645 podStartE2EDuration="1m27.135635387s" podCreationTimestamp="2025-12-01 00:22:03 +0000 UTC" firstStartedPulling="2025-12-01 00:22:04.555577616 +0000 UTC m=+945.336346690" lastFinishedPulling="2025-12-01 00:22:57.450885358 +0000 UTC m=+998.231654432" observedRunningTime="2025-12-01 00:23:06.927475284 +0000 UTC m=+1007.708244358" watchObservedRunningTime="2025-12-01 00:23:30.135635387 +0000 UTC m=+1030.916404471" Dec 01 00:25:02 crc kubenswrapper[4846]: I1201 00:25:02.937881 4846 generic.go:334] "Generic (PLEG): container finished" podID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerID="c7f1ee47fe847bcd97c39d90f55190a5c34e10dfc8ee3e92fec252ed050c3973" exitCode=0 Dec 01 00:25:02 crc kubenswrapper[4846]: I1201 00:25:02.937999 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerDied","Data":"c7f1ee47fe847bcd97c39d90f55190a5c34e10dfc8ee3e92fec252ed050c3973"} Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.188844 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.226871 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.226938 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227024 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227050 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227076 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227114 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227156 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227204 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227098 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227202 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227231 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227331 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227365 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227416 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nh4t\" (UniqueName: \"kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t\") pod \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\" (UID: \"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47\") " Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227750 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.227771 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.228100 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.228559 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.229807 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.232127 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.235542 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.236807 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t" (OuterVolumeSpecName: "kube-api-access-4nh4t") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "kube-api-access-4nh4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.246241 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.265541 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339183 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339511 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339524 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nh4t\" (UniqueName: \"kubernetes.io/projected/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-kube-api-access-4nh4t\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339536 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339547 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339559 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339572 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.339583 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.433613 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.440508 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.956923 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47","Type":"ContainerDied","Data":"dff79e5b7d696146370c22b1bc039b1d479767853ccc7feddf1f90b9b3aa7238"} Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.956963 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff79e5b7d696146370c22b1bc039b1d479767853ccc7feddf1f90b9b3aa7238" Dec 01 00:25:04 crc kubenswrapper[4846]: I1201 00:25:04.956996 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 01 00:25:06 crc kubenswrapper[4846]: I1201 00:25:06.395336 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" (UID: "5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:06 crc kubenswrapper[4846]: I1201 00:25:06.467454 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.838529 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840047 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="registry-server" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840127 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="registry-server" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840225 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="extract-utilities" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840306 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="extract-utilities" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840403 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="docker-build" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840493 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="docker-build" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840557 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" containerName="manage-dockerfile" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840612 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" 
containerName="manage-dockerfile" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840718 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="git-clone" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840801 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="git-clone" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.840882 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="extract-content" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.840940 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="extract-content" Dec 01 00:25:08 crc kubenswrapper[4846]: E1201 00:25:08.841000 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="manage-dockerfile" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.841054 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="manage-dockerfile" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.841325 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f40d7cb-aec9-43d9-9b0b-ac5c4f555e47" containerName="docker-build" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.841417 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dcb580-ffb2-4ab6-a4b6-7759a5c1e4cd" containerName="manage-dockerfile" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.841489 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="1912a951-9d01-457f-8f22-e1f954f8a699" containerName="registry-server" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.842435 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.846598 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.846598 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.846607 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.852908 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:25:08 crc kubenswrapper[4846]: I1201 00:25:08.858391 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.004267 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.004801 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005004 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005155 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005295 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swqds\" (UniqueName: \"kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005464 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005589 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005752 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.005907 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.006153 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.006362 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.006512 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108005 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108071 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108091 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108118 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108137 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108159 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108189 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swqds\" (UniqueName: \"kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108224 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108242 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108259 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108279 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-1-build\" (UID: 
\"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108306 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.108877 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.109199 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.109267 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.109224 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.109929 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.110123 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.110413 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.110848 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.110927 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.115886 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.116387 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.131632 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swqds\" (UniqueName: \"kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds\") pod \"smart-gateway-operator-1-build\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.163341 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:09 crc kubenswrapper[4846]: I1201 00:25:09.594051 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:10 crc kubenswrapper[4846]: I1201 00:25:10.003020 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerStarted","Data":"b3583452e8ae406c4f8f7f2ff5e3a2fe2edcea37a76ccc6b4991aac1ab44fac5"} Dec 01 00:25:10 crc kubenswrapper[4846]: I1201 00:25:10.003367 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerStarted","Data":"393b3a64ba33498360404473fdea783de6cd87bd6b1eb8f79aaf8a4b7927c936"} Dec 01 00:25:11 crc kubenswrapper[4846]: I1201 00:25:11.010659 4846 generic.go:334] "Generic (PLEG): container finished" podID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerID="b3583452e8ae406c4f8f7f2ff5e3a2fe2edcea37a76ccc6b4991aac1ab44fac5" exitCode=0 Dec 01 00:25:11 crc kubenswrapper[4846]: I1201 00:25:11.010713 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerDied","Data":"b3583452e8ae406c4f8f7f2ff5e3a2fe2edcea37a76ccc6b4991aac1ab44fac5"} Dec 01 00:25:12 crc kubenswrapper[4846]: I1201 00:25:12.018845 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerStarted","Data":"272b3c0f5323d9dffe8591577562515280f2821e4fd98923a9c94e138ae434be"} Dec 01 00:25:12 crc kubenswrapper[4846]: I1201 00:25:12.042890 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.042870923 podStartE2EDuration="4.042870923s" podCreationTimestamp="2025-12-01 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:25:12.040446918 +0000 UTC m=+1132.821215992" watchObservedRunningTime="2025-12-01 00:25:12.042870923 +0000 UTC m=+1132.823639997" Dec 01 00:25:19 crc kubenswrapper[4846]: I1201 00:25:19.784456 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:19 crc kubenswrapper[4846]: I1201 00:25:19.785407 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="docker-build" containerID="cri-o://272b3c0f5323d9dffe8591577562515280f2821e4fd98923a9c94e138ae434be" gracePeriod=30 Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.076484 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_d8bb7ceb-1870-4ea4-ab44-3bab4c088f78/docker-build/0.log" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.077242 4846 generic.go:334] "Generic (PLEG): container finished" podID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerID="272b3c0f5323d9dffe8591577562515280f2821e4fd98923a9c94e138ae434be" exitCode=1 Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.077299 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerDied","Data":"272b3c0f5323d9dffe8591577562515280f2821e4fd98923a9c94e138ae434be"} Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.150670 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_d8bb7ceb-1870-4ea4-ab44-3bab4c088f78/docker-build/0.log" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.151672 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255082 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255165 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255192 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255248 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swqds\" (UniqueName: \"kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255275 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255297 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255322 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255353 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 
00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255395 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255422 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255442 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255456 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255466 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull\") pod \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\" (UID: \"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78\") " Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.255722 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.256379 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.256423 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.256953 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.257021 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.257054 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.257260 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.260607 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.260828 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds" (OuterVolumeSpecName: "kube-api-access-swqds") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "kube-api-access-swqds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.260960 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.332257 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.356603 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.356931 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357007 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357071 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357136 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357196 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357250 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357304 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swqds\" (UniqueName: \"kubernetes.io/projected/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-kube-api-access-swqds\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357367 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.357458 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.442365 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" (UID: "d8bb7ceb-1870-4ea4-ab44-3bab4c088f78"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:25:20 crc kubenswrapper[4846]: I1201 00:25:20.458331 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.085500 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_d8bb7ceb-1870-4ea4-ab44-3bab4c088f78/docker-build/0.log" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.085927 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"d8bb7ceb-1870-4ea4-ab44-3bab4c088f78","Type":"ContainerDied","Data":"393b3a64ba33498360404473fdea783de6cd87bd6b1eb8f79aaf8a4b7927c936"} Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.085976 4846 scope.go:117] "RemoveContainer" containerID="272b3c0f5323d9dffe8591577562515280f2821e4fd98923a9c94e138ae434be" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.086018 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.122599 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.126945 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.159327 4846 scope.go:117] "RemoveContainer" containerID="b3583452e8ae406c4f8f7f2ff5e3a2fe2edcea37a76ccc6b4991aac1ab44fac5" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.383395 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 01 00:25:21 crc kubenswrapper[4846]: E1201 00:25:21.384003 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="docker-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.384088 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="docker-build" Dec 01 00:25:21 crc kubenswrapper[4846]: E1201 00:25:21.384178 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="manage-dockerfile" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.384245 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="manage-dockerfile" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.384450 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" containerName="docker-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.385381 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.388180 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.388398 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.388709 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.388906 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.401129 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472022 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472061 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472090 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472133 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472171 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472192 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472206 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472219 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472238 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472277 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472296 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.472315 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573132 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573204 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573239 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573287 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573318 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573358 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573381 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573414 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573274 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573444 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573561 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573602 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573647 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573827 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573904 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.573920 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.574134 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.574234 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.574331 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.574447 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles\") pod 
\"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.575052 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.578142 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.578551 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.591487 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8bb7ceb-1870-4ea4-ab44-3bab4c088f78" path="/var/lib/kubelet/pods/d8bb7ceb-1870-4ea4-ab44-3bab4c088f78/volumes" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.591838 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt\") pod \"smart-gateway-operator-2-build\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.706444 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:25:21 crc kubenswrapper[4846]: I1201 00:25:21.925848 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Dec 01 00:25:22 crc kubenswrapper[4846]: I1201 00:25:22.094396 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerStarted","Data":"9a1ef478c1237702aa134f14e6f2f1587206b0f4d9d6982b80d33614e17181f0"} Dec 01 00:25:23 crc kubenswrapper[4846]: I1201 00:25:23.103782 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerStarted","Data":"ffbab2aef6d8c3c3b5f92f4b342bf611bee193fe00ef39f05bd7da0b1d65ae50"} Dec 01 00:25:24 crc kubenswrapper[4846]: I1201 00:25:24.111379 4846 generic.go:334] "Generic (PLEG): container finished" podID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerID="ffbab2aef6d8c3c3b5f92f4b342bf611bee193fe00ef39f05bd7da0b1d65ae50" exitCode=0 Dec 01 00:25:24 crc kubenswrapper[4846]: I1201 00:25:24.111460 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerDied","Data":"ffbab2aef6d8c3c3b5f92f4b342bf611bee193fe00ef39f05bd7da0b1d65ae50"} Dec 01 00:25:25 crc kubenswrapper[4846]: I1201 00:25:25.122461 4846 generic.go:334] "Generic (PLEG): container finished" podID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerID="216107444c67bb3ccf4a73754e52b0ced78a06546227245b4a05d9869c14fdfe" exitCode=0 Dec 01 00:25:25 crc kubenswrapper[4846]: I1201 00:25:25.122553 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerDied","Data":"216107444c67bb3ccf4a73754e52b0ced78a06546227245b4a05d9869c14fdfe"} Dec 01 00:25:25 crc kubenswrapper[4846]: I1201 00:25:25.174413 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca/manage-dockerfile/0.log" Dec 01 00:25:25 crc kubenswrapper[4846]: I1201 00:25:25.419457 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:25:25 crc kubenswrapper[4846]: I1201 00:25:25.419990 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:25:26 crc kubenswrapper[4846]: I1201 00:25:26.133382 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerStarted","Data":"7e87ced975bc0c6a7226381cfa705432d53634ea7939d0f1388bbfd295cd66df"} Dec 01 00:25:26 crc kubenswrapper[4846]: I1201 00:25:26.163907 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.163891657 podStartE2EDuration="5.163891657s" podCreationTimestamp="2025-12-01 00:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:25:26.16074741 +0000 UTC m=+1146.941516474" watchObservedRunningTime="2025-12-01 00:25:26.163891657 +0000 UTC m=+1146.944660741" Dec 01 00:25:55 crc kubenswrapper[4846]: I1201 00:25:55.419972 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:25:55 crc kubenswrapper[4846]: I1201 00:25:55.420587 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:26:25 crc kubenswrapper[4846]: I1201 00:26:25.419462 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:26:25 crc kubenswrapper[4846]: I1201 00:26:25.420038 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:26:25 crc kubenswrapper[4846]: I1201 00:26:25.420079 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:26:25 crc kubenswrapper[4846]: I1201 00:26:25.420721 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:26:25 crc kubenswrapper[4846]: I1201 00:26:25.420783 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e" gracePeriod=600 Dec 01 00:26:26 crc kubenswrapper[4846]: I1201 00:26:26.508617 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e" exitCode=0 Dec 01 00:26:26 crc kubenswrapper[4846]: I1201 00:26:26.508981 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" 
event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e"} Dec 01 00:26:26 crc kubenswrapper[4846]: I1201 00:26:26.509017 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb"} Dec 01 00:26:26 crc kubenswrapper[4846]: I1201 00:26:26.509036 4846 scope.go:117] "RemoveContainer" containerID="026040bd9bee15e6472842d00a2ed6b5f6286335549254765c690aa85dd16d9e" Dec 01 00:26:47 crc kubenswrapper[4846]: I1201 00:26:47.646731 4846 generic.go:334] "Generic (PLEG): container finished" podID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerID="7e87ced975bc0c6a7226381cfa705432d53634ea7939d0f1388bbfd295cd66df" exitCode=0 Dec 01 00:26:47 crc kubenswrapper[4846]: I1201 00:26:47.646829 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerDied","Data":"7e87ced975bc0c6a7226381cfa705432d53634ea7939d0f1388bbfd295cd66df"} Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.876098 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916551 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916631 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916673 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916774 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916820 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.916879 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir\") pod 
\"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.917241 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.917393 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.917594 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.918115 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.920973 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.921845 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922001 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922185 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922221 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922274 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir\") pod \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\" (UID: \"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca\") " Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922313 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922971 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.922999 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.923014 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.923027 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.923040 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.923071 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.923453 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.952578 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.952633 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt" (OuterVolumeSpecName: "kube-api-access-t6brt") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "kube-api-access-t6brt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:26:48 crc kubenswrapper[4846]: I1201 00:26:48.953212 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.024013 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.024063 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.024080 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.024089 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-kube-api-access-t6brt\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.024098 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.181863 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.227541 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.669734 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca","Type":"ContainerDied","Data":"9a1ef478c1237702aa134f14e6f2f1587206b0f4d9d6982b80d33614e17181f0"} Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.669781 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1ef478c1237702aa134f14e6f2f1587206b0f4d9d6982b80d33614e17181f0" Dec 01 00:26:49 crc kubenswrapper[4846]: I1201 00:26:49.669866 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Dec 01 00:26:51 crc kubenswrapper[4846]: I1201 00:26:51.087567 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" (UID: "c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:26:51 crc kubenswrapper[4846]: I1201 00:26:51.153419 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.705390 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:26:53 crc kubenswrapper[4846]: E1201 00:26:53.705999 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="manage-dockerfile" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.706017 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="manage-dockerfile" Dec 01 00:26:53 crc kubenswrapper[4846]: E1201 00:26:53.706030 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="git-clone" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.706038 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="git-clone" Dec 01 00:26:53 crc kubenswrapper[4846]: E1201 00:26:53.706049 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="docker-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.706058 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="docker-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.706188 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="c71311f4-1b03-4dd4-88ad-dd7b3fd1c6ca" containerName="docker-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.706996 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.709016 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.709321 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.709600 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.710565 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.722326 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786721 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786790 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786827 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786853 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786873 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.786925 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787017 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787067 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787094 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6vf\" (UniqueName: \"kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787120 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787143 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.787206 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889170 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889319 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889386 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889498 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889556 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889589 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889606 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889650 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889674 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.889897 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.890500 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.890756 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.890847 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " 
pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.890945 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.891016 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6vf\" (UniqueName: \"kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.891063 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.891522 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.891735 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.892019 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.892291 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.892428 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.898519 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.898531 4846 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:53 crc kubenswrapper[4846]: I1201 00:26:53.909490 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6vf\" (UniqueName: \"kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf\") pod \"sg-core-1-build\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " pod="service-telemetry/sg-core-1-build" Dec 01 00:26:54 crc kubenswrapper[4846]: I1201 00:26:54.024929 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 01 00:26:54 crc kubenswrapper[4846]: I1201 00:26:54.439569 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:26:54 crc kubenswrapper[4846]: I1201 00:26:54.704781 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69fa9678-9fad-46ac-be38-f84a879236c1","Type":"ContainerStarted","Data":"c25d315020c1113683f8a10cb84fb02b1bdfaf7688ad3c84ac10837d10c818cb"} Dec 01 00:26:55 crc kubenswrapper[4846]: E1201 00:26:55.015261 4846 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fa9678_9fad_46ac_be38_f84a879236c1.slice/crio-57f571ea15f9f6f9336891acf60328baf552628582dfddecb5b6825cc3c3e49c.scope\": RecentStats: unable to find data in memory cache]" Dec 01 00:26:55 crc kubenswrapper[4846]: I1201 00:26:55.712888 4846 generic.go:334] "Generic (PLEG): container finished" podID="69fa9678-9fad-46ac-be38-f84a879236c1" containerID="57f571ea15f9f6f9336891acf60328baf552628582dfddecb5b6825cc3c3e49c" exitCode=0 Dec 01 00:26:55 crc kubenswrapper[4846]: I1201 00:26:55.712947 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69fa9678-9fad-46ac-be38-f84a879236c1","Type":"ContainerDied","Data":"57f571ea15f9f6f9336891acf60328baf552628582dfddecb5b6825cc3c3e49c"} Dec 01 00:26:56 crc kubenswrapper[4846]: I1201 00:26:56.723579 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69fa9678-9fad-46ac-be38-f84a879236c1","Type":"ContainerStarted","Data":"ceade721681e87656e0b6df6d377b261d52a6b18fe7e2ff941037297d8ff0a33"} Dec 01 00:26:56 crc kubenswrapper[4846]: I1201 00:26:56.750492 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.7504709529999998 podStartE2EDuration="3.750470953s" podCreationTimestamp="2025-12-01 00:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:26:56.745057166 +0000 UTC m=+1237.525826260" watchObservedRunningTime="2025-12-01 00:26:56.750470953 +0000 UTC m=+1237.531240027" Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.028892 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.029712 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" 
containerName="docker-build" containerID="cri-o://ceade721681e87656e0b6df6d377b261d52a6b18fe7e2ff941037297d8ff0a33" gracePeriod=30 Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.781268 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_69fa9678-9fad-46ac-be38-f84a879236c1/docker-build/0.log" Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.782448 4846 generic.go:334] "Generic (PLEG): container finished" podID="69fa9678-9fad-46ac-be38-f84a879236c1" containerID="ceade721681e87656e0b6df6d377b261d52a6b18fe7e2ff941037297d8ff0a33" exitCode=1 Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.782497 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69fa9678-9fad-46ac-be38-f84a879236c1","Type":"ContainerDied","Data":"ceade721681e87656e0b6df6d377b261d52a6b18fe7e2ff941037297d8ff0a33"} Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.884950 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_69fa9678-9fad-46ac-be38-f84a879236c1/docker-build/0.log" Dec 01 00:27:04 crc kubenswrapper[4846]: I1201 00:27:04.885766 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.040839 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.041427 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.041506 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.042283 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.044871 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.044955 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045109 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045178 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045234 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045262 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045348 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045380 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045451 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045478 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045511 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd6vf\" (UniqueName: \"kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf\") pod \"69fa9678-9fad-46ac-be38-f84a879236c1\" (UID: \"69fa9678-9fad-46ac-be38-f84a879236c1\") " Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045642 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.045727 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046120 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046140 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046154 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046167 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/69fa9678-9fad-46ac-be38-f84a879236c1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046179 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.046618 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.047062 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.053414 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.056868 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.056914 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf" (OuterVolumeSpecName: "kube-api-access-vd6vf") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "kube-api-access-vd6vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.140489 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147395 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147526 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69fa9678-9fad-46ac-be38-f84a879236c1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147585 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147670 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd6vf\" (UniqueName: \"kubernetes.io/projected/69fa9678-9fad-46ac-be38-f84a879236c1-kube-api-access-vd6vf\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147765 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.147825 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/69fa9678-9fad-46ac-be38-f84a879236c1-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.189263 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "69fa9678-9fad-46ac-be38-f84a879236c1" (UID: "69fa9678-9fad-46ac-be38-f84a879236c1"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.248533 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/69fa9678-9fad-46ac-be38-f84a879236c1-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.624431 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 01 00:27:05 crc kubenswrapper[4846]: E1201 00:27:05.624757 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" containerName="manage-dockerfile" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.624781 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" containerName="manage-dockerfile" Dec 01 00:27:05 crc kubenswrapper[4846]: E1201 00:27:05.624817 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" containerName="docker-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.624825 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" containerName="docker-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.624963 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" containerName="docker-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.626301 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.634333 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.635005 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.635356 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.637098 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.657411 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.657489 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658375 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 
00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658426 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658463 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658656 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658727 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658761 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658803 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9vn\" (UniqueName: \"kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658877 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658907 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.658939 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760231 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760308 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760341 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760364 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760372 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760388 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760410 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760437 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760448 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 
00:27:05.760461 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760521 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.760545 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761158 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761188 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761279 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761358 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761356 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761628 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.761726 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9vn\" (UniqueName: 
\"kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.762045 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.762230 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.764123 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.765249 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.786547 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9vn\" (UniqueName: \"kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn\") pod \"sg-core-2-build\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " pod="service-telemetry/sg-core-2-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.791365 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_69fa9678-9fad-46ac-be38-f84a879236c1/docker-build/0.log" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.791694 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"69fa9678-9fad-46ac-be38-f84a879236c1","Type":"ContainerDied","Data":"c25d315020c1113683f8a10cb84fb02b1bdfaf7688ad3c84ac10837d10c818cb"} Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.791741 4846 scope.go:117] "RemoveContainer" containerID="ceade721681e87656e0b6df6d377b261d52a6b18fe7e2ff941037297d8ff0a33" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.791855 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.852768 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.857525 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.862591 4846 scope.go:117] "RemoveContainer" containerID="57f571ea15f9f6f9336891acf60328baf552628582dfddecb5b6825cc3c3e49c" Dec 01 00:27:05 crc kubenswrapper[4846]: I1201 00:27:05.958575 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 01 00:27:06 crc kubenswrapper[4846]: I1201 00:27:06.187008 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Dec 01 00:27:06 crc kubenswrapper[4846]: I1201 00:27:06.799777 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerStarted","Data":"7b1aa9a757871a3782c07146c7ad5f9affe58ccb75d799a7ec3348fb9efdb5c0"} Dec 01 00:27:06 crc kubenswrapper[4846]: I1201 00:27:06.800150 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerStarted","Data":"f32e71c61b6bb79104a041b3d9ce01ad064370348d9328dad0188c023ade8f8c"} Dec 01 00:27:07 crc kubenswrapper[4846]: I1201 00:27:07.587434 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fa9678-9fad-46ac-be38-f84a879236c1" path="/var/lib/kubelet/pods/69fa9678-9fad-46ac-be38-f84a879236c1/volumes" Dec 01 00:27:07 crc kubenswrapper[4846]: I1201 00:27:07.808090 4846 generic.go:334] "Generic (PLEG): container finished" podID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerID="7b1aa9a757871a3782c07146c7ad5f9affe58ccb75d799a7ec3348fb9efdb5c0" exitCode=0 Dec 01 00:27:07 crc kubenswrapper[4846]: I1201 00:27:07.808143 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerDied","Data":"7b1aa9a757871a3782c07146c7ad5f9affe58ccb75d799a7ec3348fb9efdb5c0"} Dec 01 00:27:08 crc kubenswrapper[4846]: I1201 00:27:08.829498 4846 generic.go:334] "Generic (PLEG): container finished" podID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerID="3823c23109fd21e3031e32531130d7d174b34cce579e971ec6310c7fedd43948" exitCode=0 Dec 01 00:27:08 crc kubenswrapper[4846]: I1201 00:27:08.829549 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerDied","Data":"3823c23109fd21e3031e32531130d7d174b34cce579e971ec6310c7fedd43948"} Dec 01 00:27:08 crc kubenswrapper[4846]: I1201 00:27:08.864835 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_71ca190c-288b-40c4-b6d9-8d1ea69a45d8/manage-dockerfile/0.log" Dec 01 00:27:09 crc kubenswrapper[4846]: I1201 00:27:09.837846 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerStarted","Data":"289ae08e5e58cda622407ffbe70618a8a43a95d00a350d6db189f17db498b1be"} Dec 01 00:27:09 crc kubenswrapper[4846]: I1201 00:27:09.864787 4846 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.864766083 podStartE2EDuration="4.864766083s" podCreationTimestamp="2025-12-01 00:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:27:09.859761229 +0000 UTC m=+1250.640530303" watchObservedRunningTime="2025-12-01 00:27:09.864766083 +0000 UTC m=+1250.645535157" Dec 01 00:28:25 crc kubenswrapper[4846]: I1201 00:28:25.420172 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:28:25 crc kubenswrapper[4846]: I1201 00:28:25.420736 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:28:55 crc kubenswrapper[4846]: I1201 00:28:55.419401 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:28:55 crc kubenswrapper[4846]: I1201 00:28:55.420503 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.420091 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.420629 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.420676 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.421360 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.421415 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" 
podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb" gracePeriod=600 Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.735855 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb" exitCode=0 Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.735920 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb"} Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.736315 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6"} Dec 01 00:29:25 crc kubenswrapper[4846]: I1201 00:29:25.736340 4846 scope.go:117] "RemoveContainer" containerID="3dfc0595400f347c5ef5f65984c9dbb8f2235f93abe92aaf32ec06033a254e0e" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.141502 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl"] Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.142966 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.145484 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.145586 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.149782 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl"] Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.217942 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.218038 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gz48\" (UniqueName: \"kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.218202 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.319241 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.319300 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.319355 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gz48\" (UniqueName: \"kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.320517 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.325490 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.335407 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gz48\" (UniqueName: \"kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48\") pod \"collect-profiles-29409150-kjhfl\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.462129 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:00 crc kubenswrapper[4846]: I1201 00:30:00.653506 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl"] Dec 01 00:30:01 crc kubenswrapper[4846]: I1201 00:30:01.224123 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" event={"ID":"9706a7cd-2129-41d5-ade5-8c1b266ccb6c","Type":"ContainerStarted","Data":"1cde02fc21a252a74212841d00449640fe08bb0c5ad61516234c1f9cac4516e6"} Dec 01 00:30:04 crc kubenswrapper[4846]: I1201 00:30:04.243333 4846 generic.go:334] "Generic (PLEG): container finished" podID="9706a7cd-2129-41d5-ade5-8c1b266ccb6c" containerID="0412bfe6c7e6b959caa034dc940a2f4e113c50ac44e373918d6653ffc480fc25" exitCode=0 Dec 01 00:30:04 crc kubenswrapper[4846]: I1201 00:30:04.243442 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" event={"ID":"9706a7cd-2129-41d5-ade5-8c1b266ccb6c","Type":"ContainerDied","Data":"0412bfe6c7e6b959caa034dc940a2f4e113c50ac44e373918d6653ffc480fc25"} Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.614558 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.794909 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gz48\" (UniqueName: \"kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48\") pod \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.794993 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume\") pod \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.795046 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume\") pod \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\" (UID: \"9706a7cd-2129-41d5-ade5-8c1b266ccb6c\") " Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.796300 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9706a7cd-2129-41d5-ade5-8c1b266ccb6c" (UID: "9706a7cd-2129-41d5-ade5-8c1b266ccb6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.800663 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9706a7cd-2129-41d5-ade5-8c1b266ccb6c" (UID: "9706a7cd-2129-41d5-ade5-8c1b266ccb6c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.800783 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48" (OuterVolumeSpecName: "kube-api-access-9gz48") pod "9706a7cd-2129-41d5-ade5-8c1b266ccb6c" (UID: "9706a7cd-2129-41d5-ade5-8c1b266ccb6c"). InnerVolumeSpecName "kube-api-access-9gz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.896236 4846 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.896291 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gz48\" (UniqueName: \"kubernetes.io/projected/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-kube-api-access-9gz48\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:05 crc kubenswrapper[4846]: I1201 00:30:05.896316 4846 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9706a7cd-2129-41d5-ade5-8c1b266ccb6c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:06 crc kubenswrapper[4846]: I1201 00:30:06.257856 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" event={"ID":"9706a7cd-2129-41d5-ade5-8c1b266ccb6c","Type":"ContainerDied","Data":"1cde02fc21a252a74212841d00449640fe08bb0c5ad61516234c1f9cac4516e6"} Dec 01 00:30:06 crc kubenswrapper[4846]: I1201 00:30:06.257897 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cde02fc21a252a74212841d00449640fe08bb0c5ad61516234c1f9cac4516e6" Dec 01 00:30:06 crc kubenswrapper[4846]: I1201 00:30:06.257915 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409150-kjhfl" Dec 01 00:30:17 crc kubenswrapper[4846]: I1201 00:30:17.327471 4846 generic.go:334] "Generic (PLEG): container finished" podID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerID="289ae08e5e58cda622407ffbe70618a8a43a95d00a350d6db189f17db498b1be" exitCode=0 Dec 01 00:30:17 crc kubenswrapper[4846]: I1201 00:30:17.327540 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerDied","Data":"289ae08e5e58cda622407ffbe70618a8a43a95d00a350d6db189f17db498b1be"} Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.601014 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765221 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765312 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765360 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765386 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765403 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765425 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765441 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765459 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765500 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765548 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765566 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj9vn\" (UniqueName: \"kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.765590 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull\") pod \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\" (UID: \"71ca190c-288b-40c4-b6d9-8d1ea69a45d8\") " Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.766091 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.766169 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.766175 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.766349 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.766494 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.767379 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). 
InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.772166 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.772258 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.775620 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.776370 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn" (OuterVolumeSpecName: "kube-api-access-sj9vn") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "kube-api-access-sj9vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867272 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867336 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867358 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867375 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867393 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867407 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867421 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867432 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj9vn\" (UniqueName: \"kubernetes.io/projected/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-kube-api-access-sj9vn\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867445 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:18 crc kubenswrapper[4846]: I1201 00:30:18.867460 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:19 crc kubenswrapper[4846]: I1201 00:30:19.122474 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:19 crc kubenswrapper[4846]: I1201 00:30:19.173063 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:19 crc kubenswrapper[4846]: I1201 00:30:19.345419 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"71ca190c-288b-40c4-b6d9-8d1ea69a45d8","Type":"ContainerDied","Data":"f32e71c61b6bb79104a041b3d9ce01ad064370348d9328dad0188c023ade8f8c"} Dec 01 00:30:19 crc kubenswrapper[4846]: I1201 00:30:19.345467 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32e71c61b6bb79104a041b3d9ce01ad064370348d9328dad0188c023ade8f8c" Dec 01 00:30:19 crc kubenswrapper[4846]: I1201 00:30:19.345497 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Dec 01 00:30:21 crc kubenswrapper[4846]: I1201 00:30:21.216907 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "71ca190c-288b-40c4-b6d9-8d1ea69a45d8" (UID: "71ca190c-288b-40c4-b6d9-8d1ea69a45d8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:21 crc kubenswrapper[4846]: I1201 00:30:21.301309 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/71ca190c-288b-40c4-b6d9-8d1ea69a45d8-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219206 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:23 crc kubenswrapper[4846]: E1201 00:30:23.219534 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="docker-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219550 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="docker-build" Dec 01 00:30:23 crc kubenswrapper[4846]: E1201 00:30:23.219566 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9706a7cd-2129-41d5-ade5-8c1b266ccb6c" containerName="collect-profiles" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219573 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="9706a7cd-2129-41d5-ade5-8c1b266ccb6c" containerName="collect-profiles" Dec 01 00:30:23 crc kubenswrapper[4846]: E1201 00:30:23.219586 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="manage-dockerfile" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219594 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="manage-dockerfile" Dec 01 00:30:23 crc kubenswrapper[4846]: E1201 00:30:23.219604 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="git-clone" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219611 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="git-clone" Dec 01 00:30:23 crc 
kubenswrapper[4846]: I1201 00:30:23.219753 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="9706a7cd-2129-41d5-ade5-8c1b266ccb6c" containerName="collect-profiles" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.219769 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ca190c-288b-40c4-b6d9-8d1ea69a45d8" containerName="docker-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.220493 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.223921 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.227559 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.227941 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.228150 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.242261 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.329564 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.329613 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.329992 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330024 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330053 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330078 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330102 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330118 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrvg\" (UniqueName: \"kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330136 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330153 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330170 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.330201 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431496 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431554 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc 
kubenswrapper[4846]: I1201 00:30:23.431576 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431590 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431616 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431639 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431662 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431678 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrvg\" (UniqueName: \"kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431786 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431815 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431841 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.431882 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432066 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432131 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432407 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432549 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432876 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432971 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.432984 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.433150 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.433452 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles\") 
pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.437735 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.446675 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.450284 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrvg\" (UniqueName: \"kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg\") pod \"sg-bridge-1-build\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.539030 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:23 crc kubenswrapper[4846]: I1201 00:30:23.750625 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:24 crc kubenswrapper[4846]: I1201 00:30:24.379792 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"75f54a65-d278-4d44-8e65-88dd35499655","Type":"ContainerStarted","Data":"b70924bf02bfbe32080882489dbebfab65bdcab151740a5c765088b90cc9fcf5"} Dec 01 00:30:25 crc kubenswrapper[4846]: I1201 00:30:25.390606 4846 generic.go:334] "Generic (PLEG): container finished" podID="75f54a65-d278-4d44-8e65-88dd35499655" containerID="1f894d8e499fc9d110382ee816954895420a02c97facbd24bfa3cd35b26fda14" exitCode=0 Dec 01 00:30:25 crc kubenswrapper[4846]: I1201 00:30:25.390904 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"75f54a65-d278-4d44-8e65-88dd35499655","Type":"ContainerDied","Data":"1f894d8e499fc9d110382ee816954895420a02c97facbd24bfa3cd35b26fda14"} Dec 01 00:30:26 crc kubenswrapper[4846]: I1201 00:30:26.399219 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"75f54a65-d278-4d44-8e65-88dd35499655","Type":"ContainerStarted","Data":"82531ccc593b911f729b728597612146674f2ac124c8f2152305c7f391a49505"} Dec 01 00:30:26 crc kubenswrapper[4846]: I1201 00:30:26.423328 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.423307307 podStartE2EDuration="3.423307307s" podCreationTimestamp="2025-12-01 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:30:26.422060789 +0000 UTC m=+1447.202829873" watchObservedRunningTime="2025-12-01 00:30:26.423307307 +0000 UTC m=+1447.204076381" Dec 01 00:30:32 crc kubenswrapper[4846]: I1201 00:30:32.435889 4846 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-bridge-1-build_75f54a65-d278-4d44-8e65-88dd35499655/docker-build/0.log" Dec 01 00:30:32 crc kubenswrapper[4846]: I1201 00:30:32.437997 4846 generic.go:334] "Generic (PLEG): container finished" podID="75f54a65-d278-4d44-8e65-88dd35499655" containerID="82531ccc593b911f729b728597612146674f2ac124c8f2152305c7f391a49505" exitCode=1 Dec 01 00:30:32 crc kubenswrapper[4846]: I1201 00:30:32.438084 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"75f54a65-d278-4d44-8e65-88dd35499655","Type":"ContainerDied","Data":"82531ccc593b911f729b728597612146674f2ac124c8f2152305c7f391a49505"} Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.513626 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.689119 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_75f54a65-d278-4d44-8e65-88dd35499655/docker-build/0.log" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.689767 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.870111 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.870187 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.870226 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.870313 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871322 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871443 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqrvg\" (UniqueName: \"kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871510 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871540 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871568 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871610 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871637 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871663 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871713 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull\") pod \"75f54a65-d278-4d44-8e65-88dd35499655\" (UID: \"75f54a65-d278-4d44-8e65-88dd35499655\") " Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.871941 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.872031 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873164 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873306 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873466 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873499 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873505 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873604 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873626 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/75f54a65-d278-4d44-8e65-88dd35499655-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873645 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.873663 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/75f54a65-d278-4d44-8e65-88dd35499655-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.880899 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.881067 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg" (OuterVolumeSpecName: "kube-api-access-fqrvg") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "kube-api-access-fqrvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.881749 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.945161 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975115 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975147 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975157 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975166 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/75f54a65-d278-4d44-8e65-88dd35499655-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975174 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqrvg\" (UniqueName: \"kubernetes.io/projected/75f54a65-d278-4d44-8e65-88dd35499655-kube-api-access-fqrvg\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:33 crc kubenswrapper[4846]: I1201 00:30:33.975185 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.280612 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "75f54a65-d278-4d44-8e65-88dd35499655" (UID: "75f54a65-d278-4d44-8e65-88dd35499655"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.380477 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/75f54a65-d278-4d44-8e65-88dd35499655-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.454928 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_75f54a65-d278-4d44-8e65-88dd35499655/docker-build/0.log" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.455548 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"75f54a65-d278-4d44-8e65-88dd35499655","Type":"ContainerDied","Data":"b70924bf02bfbe32080882489dbebfab65bdcab151740a5c765088b90cc9fcf5"} Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.455595 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70924bf02bfbe32080882489dbebfab65bdcab151740a5c765088b90cc9fcf5" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.455746 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.495369 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:34 crc kubenswrapper[4846]: I1201 00:30:34.507962 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.137730 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 01 00:30:35 crc kubenswrapper[4846]: E1201 00:30:35.138127 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f54a65-d278-4d44-8e65-88dd35499655" containerName="manage-dockerfile" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.138148 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f54a65-d278-4d44-8e65-88dd35499655" containerName="manage-dockerfile" Dec 01 00:30:35 crc kubenswrapper[4846]: E1201 00:30:35.138177 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f54a65-d278-4d44-8e65-88dd35499655" containerName="docker-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.138186 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f54a65-d278-4d44-8e65-88dd35499655" containerName="docker-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.138361 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f54a65-d278-4d44-8e65-88dd35499655" containerName="docker-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.139552 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.143837 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.143997 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.144051 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.144705 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.206225 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.308792 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.308882 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.308981 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309016 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309047 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpfl\" (UniqueName: \"kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309065 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309156 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309221 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309268 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309317 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309365 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.309387 4846 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410445 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410521 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410568 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410591 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410631 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410663 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410736 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410761 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410798 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gfpfl\" (UniqueName: \"kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410824 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410854 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410855 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410965 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.411057 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.410882 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.411705 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.411921 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.412492 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.412547 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.412627 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.413117 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.416103 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.416626 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.436716 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpfl\" (UniqueName: \"kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl\") pod \"sg-bridge-2-build\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.455551 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.591470 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f54a65-d278-4d44-8e65-88dd35499655" path="/var/lib/kubelet/pods/75f54a65-d278-4d44-8e65-88dd35499655/volumes" Dec 01 00:30:35 crc kubenswrapper[4846]: I1201 00:30:35.659051 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Dec 01 00:30:36 crc kubenswrapper[4846]: I1201 00:30:36.469217 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerStarted","Data":"a023ac34404b0523ed4c134f4d93354501796d8ca6e2b76c3fcd22512ae27e7c"} Dec 01 00:30:36 crc kubenswrapper[4846]: I1201 00:30:36.469519 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerStarted","Data":"1518bf6b698431011ff89e10747fd06a9e17bc6247b1002908b6f155fedd7b60"} Dec 01 00:30:37 crc kubenswrapper[4846]: I1201 00:30:37.482099 4846 generic.go:334] "Generic (PLEG): container finished" podID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerID="a023ac34404b0523ed4c134f4d93354501796d8ca6e2b76c3fcd22512ae27e7c" exitCode=0 Dec 01 00:30:37 crc kubenswrapper[4846]: I1201 00:30:37.482189 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerDied","Data":"a023ac34404b0523ed4c134f4d93354501796d8ca6e2b76c3fcd22512ae27e7c"} Dec 01 00:30:38 crc kubenswrapper[4846]: I1201 00:30:38.493457 4846 generic.go:334] "Generic (PLEG): container finished" podID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerID="3c5ba0c7586261adb8bf3406b562bef238bf23faea976dbef5fa27f80a726f95" exitCode=0 Dec 01 00:30:38 crc kubenswrapper[4846]: I1201 00:30:38.493528 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerDied","Data":"3c5ba0c7586261adb8bf3406b562bef238bf23faea976dbef5fa27f80a726f95"} Dec 01 00:30:38 crc kubenswrapper[4846]: I1201 00:30:38.543215 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_47ba426d-4c2b-4841-81d2-10d884b4556e/manage-dockerfile/0.log" Dec 01 00:30:39 crc kubenswrapper[4846]: I1201 00:30:39.506021 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerStarted","Data":"636d418930b0260d49764fa4fef67d4b7e2151c1705d2060677de2db19e15c12"} Dec 01 00:30:39 crc kubenswrapper[4846]: I1201 00:30:39.549904 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.549867282 podStartE2EDuration="4.549867282s" podCreationTimestamp="2025-12-01 00:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:30:39.547598392 +0000 UTC m=+1460.328367486" watchObservedRunningTime="2025-12-01 00:30:39.549867282 +0000 UTC m=+1460.330636356" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.651560 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 
00:31:06.653533 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.662736 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.823147 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k467f\" (UniqueName: \"kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.823205 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.823237 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.924553 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k467f\" (UniqueName: \"kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.924594 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.924627 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.925101 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.925117 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.958021 4846 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k467f\" (UniqueName: \"kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f\") pod \"redhat-operators-l952x\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:06 crc kubenswrapper[4846]: I1201 00:31:06.978124 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:07 crc kubenswrapper[4846]: I1201 00:31:07.406853 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:07 crc kubenswrapper[4846]: I1201 00:31:07.713204 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerStarted","Data":"8cde0fcac420511846c0270c0bb0d7338cbac3e7e86b1ae663517474e87ad732"} Dec 01 00:31:07 crc kubenswrapper[4846]: I1201 00:31:07.713726 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerStarted","Data":"7d224b344181902786a1831afbf8a6caa822e677087e9eb680e5f57eda7b4cb8"} Dec 01 00:31:08 crc kubenswrapper[4846]: I1201 00:31:08.720841 4846 generic.go:334] "Generic (PLEG): container finished" podID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerID="8cde0fcac420511846c0270c0bb0d7338cbac3e7e86b1ae663517474e87ad732" exitCode=0 Dec 01 00:31:08 crc kubenswrapper[4846]: I1201 00:31:08.720880 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerDied","Data":"8cde0fcac420511846c0270c0bb0d7338cbac3e7e86b1ae663517474e87ad732"} Dec 01 00:31:08 crc kubenswrapper[4846]: I1201 00:31:08.724950 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:31:10 crc kubenswrapper[4846]: I1201 00:31:10.739107 4846 generic.go:334] "Generic (PLEG): container finished" podID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerID="d39c243aa7166dfe0d38aa8de5de4922d8c2b85705ca262abf386425c7f07fcb" exitCode=0 Dec 01 00:31:10 crc kubenswrapper[4846]: I1201 00:31:10.739188 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerDied","Data":"d39c243aa7166dfe0d38aa8de5de4922d8c2b85705ca262abf386425c7f07fcb"} Dec 01 00:31:11 crc kubenswrapper[4846]: I1201 00:31:11.748443 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerStarted","Data":"f5f4c292ffd102f74e1a6575006677457c66a1672fe41020953bc6651242c8bf"} Dec 01 00:31:11 crc kubenswrapper[4846]: I1201 00:31:11.772068 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l952x" podStartSLOduration=3.320990915 podStartE2EDuration="5.772045785s" podCreationTimestamp="2025-12-01 00:31:06 +0000 UTC" firstStartedPulling="2025-12-01 00:31:08.724596431 +0000 UTC m=+1489.505365505" lastFinishedPulling="2025-12-01 00:31:11.175651301 +0000 UTC m=+1491.956420375" observedRunningTime="2025-12-01 00:31:11.766524803 +0000 UTC m=+1492.547293897" watchObservedRunningTime="2025-12-01 
00:31:11.772045785 +0000 UTC m=+1492.552814859" Dec 01 00:31:16 crc kubenswrapper[4846]: I1201 00:31:16.978904 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:16 crc kubenswrapper[4846]: I1201 00:31:16.979225 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:17 crc kubenswrapper[4846]: I1201 00:31:17.019759 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:17 crc kubenswrapper[4846]: I1201 00:31:17.834240 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:19 crc kubenswrapper[4846]: I1201 00:31:19.127340 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:19 crc kubenswrapper[4846]: I1201 00:31:19.797256 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l952x" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="registry-server" containerID="cri-o://f5f4c292ffd102f74e1a6575006677457c66a1672fe41020953bc6651242c8bf" gracePeriod=2 Dec 01 00:31:21 crc kubenswrapper[4846]: I1201 00:31:21.813029 4846 generic.go:334] "Generic (PLEG): container finished" podID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerID="f5f4c292ffd102f74e1a6575006677457c66a1672fe41020953bc6651242c8bf" exitCode=0 Dec 01 00:31:21 crc kubenswrapper[4846]: I1201 00:31:21.813081 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerDied","Data":"f5f4c292ffd102f74e1a6575006677457c66a1672fe41020953bc6651242c8bf"} Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.464344 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.641212 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities\") pod \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.641271 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k467f\" (UniqueName: \"kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f\") pod \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.641291 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content\") pod \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\" (UID: \"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0\") " Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.642567 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities" (OuterVolumeSpecName: "utilities") pod "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" (UID: "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0"). InnerVolumeSpecName "utilities". 
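The pod_startup_latency_tracker entry for redhat-operators-l952x above reports podStartE2EDuration="5.772045785s" but podStartSLOduration=3.320990915; the gap is exactly the image-pull window between firstStartedPulling and lastFinishedPulling. A small stdlib sketch that reproduces both numbers from the timestamps printed in the log; treating the SLO duration as "E2E minus pull time" is an inference from these values, not a statement about kubelet internals.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// openshift-marketplace/redhat-operators-l952x.
	created := parse("2025-12-01 00:31:06 +0000 UTC")
	firstPull := parse("2025-12-01 00:31:08.724596431 +0000 UTC")
	lastPull := parse("2025-12-01 00:31:11.175651301 +0000 UTC")
	running := parse("2025-12-01 00:31:11.772045785 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // assumption: E2E minus time spent pulling the image

	fmt.Println("podStartE2EDuration:", e2e) // 5.772045785s
	fmt.Println("podStartSLOduration:", slo) // 3.320990915s
}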
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.651810 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f" (OuterVolumeSpecName: "kube-api-access-k467f") pod "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" (UID: "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0"). InnerVolumeSpecName "kube-api-access-k467f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.742676 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.743039 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k467f\" (UniqueName: \"kubernetes.io/projected/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-kube-api-access-k467f\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.757292 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" (UID: "7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.821873 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l952x" event={"ID":"7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0","Type":"ContainerDied","Data":"7d224b344181902786a1831afbf8a6caa822e677087e9eb680e5f57eda7b4cb8"} Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.821926 4846 scope.go:117] "RemoveContainer" containerID="f5f4c292ffd102f74e1a6575006677457c66a1672fe41020953bc6651242c8bf" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.821953 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l952x" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.843528 4846 scope.go:117] "RemoveContainer" containerID="d39c243aa7166dfe0d38aa8de5de4922d8c2b85705ca262abf386425c7f07fcb" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.843900 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.868032 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.874849 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l952x"] Dec 01 00:31:22 crc kubenswrapper[4846]: I1201 00:31:22.876595 4846 scope.go:117] "RemoveContainer" containerID="8cde0fcac420511846c0270c0bb0d7338cbac3e7e86b1ae663517474e87ad732" Dec 01 00:31:23 crc kubenswrapper[4846]: I1201 00:31:23.587427 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" path="/var/lib/kubelet/pods/7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0/volumes" Dec 01 00:31:25 crc kubenswrapper[4846]: I1201 00:31:25.419620 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:31:25 crc kubenswrapper[4846]: I1201 00:31:25.420198 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:31:31 crc kubenswrapper[4846]: I1201 00:31:31.885864 4846 generic.go:334] "Generic (PLEG): container finished" podID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerID="636d418930b0260d49764fa4fef67d4b7e2151c1705d2060677de2db19e15c12" exitCode=0 Dec 01 00:31:31 crc kubenswrapper[4846]: I1201 00:31:31.885960 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerDied","Data":"636d418930b0260d49764fa4fef67d4b7e2151c1705d2060677de2db19e15c12"} Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.138662 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239494 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239601 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfpfl\" (UniqueName: \"kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239656 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239703 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239734 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239769 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239805 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239852 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239881 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239911 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239943 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.239986 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push\") pod \"47ba426d-4c2b-4841-81d2-10d884b4556e\" (UID: \"47ba426d-4c2b-4841-81d2-10d884b4556e\") " Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.240324 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241218 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241587 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241594 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241772 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241863 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.241069 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.245630 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.245720 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.246002 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl" (OuterVolumeSpecName: "kube-api-access-gfpfl") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "kube-api-access-gfpfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341620 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfpfl\" (UniqueName: \"kubernetes.io/projected/47ba426d-4c2b-4841-81d2-10d884b4556e-kube-api-access-gfpfl\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341664 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341679 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341711 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341727 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341738 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/47ba426d-4c2b-4841-81d2-10d884b4556e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341749 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341760 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47ba426d-4c2b-4841-81d2-10d884b4556e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341773 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/47ba426d-4c2b-4841-81d2-10d884b4556e-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.341785 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.354272 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.442822 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.898448 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "47ba426d-4c2b-4841-81d2-10d884b4556e" (UID: "47ba426d-4c2b-4841-81d2-10d884b4556e"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.902024 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"47ba426d-4c2b-4841-81d2-10d884b4556e","Type":"ContainerDied","Data":"1518bf6b698431011ff89e10747fd06a9e17bc6247b1002908b6f155fedd7b60"} Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.902056 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1518bf6b698431011ff89e10747fd06a9e17bc6247b1002908b6f155fedd7b60" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.902120 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Dec 01 00:31:33 crc kubenswrapper[4846]: I1201 00:31:33.950162 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/47ba426d-4c2b-4841-81d2-10d884b4556e-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.143295 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148347 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="extract-utilities" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148377 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="extract-utilities" Dec 01 00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148393 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="git-clone" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148402 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="git-clone" Dec 01 00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148417 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="manage-dockerfile" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148431 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="manage-dockerfile" Dec 01 00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148446 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="extract-content" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148455 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="extract-content" Dec 01 
00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148465 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="docker-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148473 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="docker-build" Dec 01 00:31:38 crc kubenswrapper[4846]: E1201 00:31:38.148492 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="registry-server" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148500 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="registry-server" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148636 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3dbd96-e7ae-434c-8fa5-1dd27cf4ecb0" containerName="registry-server" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.148653 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ba426d-4c2b-4841-81d2-10d884b4556e" containerName="docker-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.149522 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.152054 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.152197 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.152316 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.156901 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.162803 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206346 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206384 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206404 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206422 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206442 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206458 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206480 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206506 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206526 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206551 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwdj\" (UniqueName: \"kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.206569 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc 
kubenswrapper[4846]: I1201 00:31:38.206584 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.308236 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.308815 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.308957 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309137 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309230 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309234 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309285 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309326 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309384 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309414 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309450 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvwdj\" (UniqueName: \"kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309470 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309493 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309620 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309637 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309285 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309775 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.309813 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.310181 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.310192 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.310574 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.314933 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.316503 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.331063 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvwdj\" (UniqueName: \"kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.470620 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.675634 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:38 crc kubenswrapper[4846]: I1201 00:31:38.933035 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a995f512-2ba1-4e3b-96bb-3c81301ffc72","Type":"ContainerStarted","Data":"3863f776204a8f1520691b55b78aeba65615d9412bae17feb8db2578da5939cc"} Dec 01 00:31:39 crc kubenswrapper[4846]: I1201 00:31:39.941300 4846 generic.go:334] "Generic (PLEG): container finished" podID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerID="548301fea0d22349553efd53a0da96e7ee408b19eae7454f4b0faedd44362483" exitCode=0 Dec 01 00:31:39 crc kubenswrapper[4846]: I1201 00:31:39.941367 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a995f512-2ba1-4e3b-96bb-3c81301ffc72","Type":"ContainerDied","Data":"548301fea0d22349553efd53a0da96e7ee408b19eae7454f4b0faedd44362483"} Dec 01 00:31:40 crc kubenswrapper[4846]: I1201 00:31:40.949630 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a995f512-2ba1-4e3b-96bb-3c81301ffc72","Type":"ContainerStarted","Data":"3cd27141ac28ecc09221cf09fc132b37858a2c6e3518e645b38cd19fc26abef7"} Dec 01 00:31:48 crc kubenswrapper[4846]: I1201 00:31:48.553675 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=10.553656336 podStartE2EDuration="10.553656336s" podCreationTimestamp="2025-12-01 00:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:31:40.975518997 +0000 UTC m=+1521.756288071" watchObservedRunningTime="2025-12-01 00:31:48.553656336 +0000 UTC m=+1529.334425410" Dec 01 00:31:48 crc kubenswrapper[4846]: I1201 00:31:48.556103 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:48 crc kubenswrapper[4846]: I1201 00:31:48.556468 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="docker-build" containerID="cri-o://3cd27141ac28ecc09221cf09fc132b37858a2c6e3518e645b38cd19fc26abef7" gracePeriod=30 Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.008291 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a995f512-2ba1-4e3b-96bb-3c81301ffc72/docker-build/0.log" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.009337 4846 generic.go:334] "Generic (PLEG): container finished" podID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerID="3cd27141ac28ecc09221cf09fc132b37858a2c6e3518e645b38cd19fc26abef7" exitCode=1 Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.009372 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a995f512-2ba1-4e3b-96bb-3c81301ffc72","Type":"ContainerDied","Data":"3cd27141ac28ecc09221cf09fc132b37858a2c6e3518e645b38cd19fc26abef7"} Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.507027 4846 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a995f512-2ba1-4e3b-96bb-3c81301ffc72/docker-build/0.log" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.507745 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643161 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643255 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643284 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643358 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643386 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643407 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643431 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643454 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643485 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: 
\"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643508 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvwdj\" (UniqueName: \"kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643535 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643563 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets\") pod \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\" (UID: \"a995f512-2ba1-4e3b-96bb-3c81301ffc72\") " Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.643808 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.644080 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.644216 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.644492 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.644505 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.644664 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.645397 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.649504 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.653991 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj" (OuterVolumeSpecName: "kube-api-access-lvwdj") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "kube-api-access-lvwdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.654759 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.697712 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744677 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744736 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744748 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744758 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744769 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744779 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744790 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744801 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvwdj\" (UniqueName: \"kubernetes.io/projected/a995f512-2ba1-4e3b-96bb-3c81301ffc72-kube-api-access-lvwdj\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744810 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/a995f512-2ba1-4e3b-96bb-3c81301ffc72-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744819 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a995f512-2ba1-4e3b-96bb-3c81301ffc72-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.744830 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a995f512-2ba1-4e3b-96bb-3c81301ffc72-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:49 crc kubenswrapper[4846]: I1201 00:31:49.980275 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a995f512-2ba1-4e3b-96bb-3c81301ffc72" (UID: "a995f512-2ba1-4e3b-96bb-3c81301ffc72"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.018403 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_a995f512-2ba1-4e3b-96bb-3c81301ffc72/docker-build/0.log" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.018997 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"a995f512-2ba1-4e3b-96bb-3c81301ffc72","Type":"ContainerDied","Data":"3863f776204a8f1520691b55b78aeba65615d9412bae17feb8db2578da5939cc"} Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.019043 4846 scope.go:117] "RemoveContainer" containerID="3cd27141ac28ecc09221cf09fc132b37858a2c6e3518e645b38cd19fc26abef7" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.019114 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.038939 4846 scope.go:117] "RemoveContainer" containerID="548301fea0d22349553efd53a0da96e7ee408b19eae7454f4b0faedd44362483" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.048538 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a995f512-2ba1-4e3b-96bb-3c81301ffc72-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.066762 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.072995 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.434115 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 01 00:31:50 crc kubenswrapper[4846]: E1201 00:31:50.434388 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="docker-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.434404 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="docker-build" Dec 01 00:31:50 crc kubenswrapper[4846]: E1201 00:31:50.434414 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="manage-dockerfile" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.434421 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="manage-dockerfile" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.434551 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" containerName="docker-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.435422 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.439934 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.440300 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.441125 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.441142 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.451873 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553626 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553684 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553758 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553781 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bbd\" (UniqueName: \"kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553804 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553835 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553854 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553869 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553922 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553971 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.553994 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.554011 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.655757 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bbd\" (UniqueName: \"kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.655836 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.655899 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.655941 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.655980 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656014 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656065 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656116 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656183 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656278 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656316 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.656360 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.657490 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.657707 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.657854 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.658002 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.658162 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.659774 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.659792 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.660089 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.660215 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.661965 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.662363 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.681854 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bbd\" (UniqueName: \"kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:50 crc kubenswrapper[4846]: I1201 00:31:50.790598 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:31:51 crc kubenswrapper[4846]: I1201 00:31:51.016067 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Dec 01 00:31:51 crc kubenswrapper[4846]: I1201 00:31:51.591624 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a995f512-2ba1-4e3b-96bb-3c81301ffc72" path="/var/lib/kubelet/pods/a995f512-2ba1-4e3b-96bb-3c81301ffc72/volumes" Dec 01 00:31:52 crc kubenswrapper[4846]: I1201 00:31:52.038329 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerStarted","Data":"0d73217bacf673bbed8fa0899bd9b0a6847316ba44cad57434f2471dc21cb2d4"} Dec 01 00:31:52 crc kubenswrapper[4846]: I1201 00:31:52.038380 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerStarted","Data":"a07fe3a9168265de433d987edb1963c340debfb0e9ea4dffa4e6ac981875259e"} Dec 01 00:31:53 crc kubenswrapper[4846]: I1201 00:31:53.046487 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerID="0d73217bacf673bbed8fa0899bd9b0a6847316ba44cad57434f2471dc21cb2d4" exitCode=0 Dec 01 00:31:53 crc kubenswrapper[4846]: I1201 00:31:53.046715 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerDied","Data":"0d73217bacf673bbed8fa0899bd9b0a6847316ba44cad57434f2471dc21cb2d4"} Dec 01 00:31:54 crc kubenswrapper[4846]: I1201 00:31:54.055193 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerID="199a1689ae8bc35a548203e944e52e645a9a7bfdfd386c65510aed8fd1d7f68c" exitCode=0 Dec 01 00:31:54 crc kubenswrapper[4846]: I1201 00:31:54.055236 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerDied","Data":"199a1689ae8bc35a548203e944e52e645a9a7bfdfd386c65510aed8fd1d7f68c"} Dec 01 00:31:54 crc kubenswrapper[4846]: I1201 00:31:54.092399 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_b4dfb7c7-2c33-4c1e-bf23-80255e63afa4/manage-dockerfile/0.log" Dec 01 00:31:55 crc kubenswrapper[4846]: I1201 00:31:55.064088 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerStarted","Data":"45bd2c7b1a665a30d3353244aaa1afa0e891742b9e1aba86980fe4f5bc93af99"} Dec 01 00:31:55 crc kubenswrapper[4846]: I1201 00:31:55.102579 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.102552527 podStartE2EDuration="5.102552527s" podCreationTimestamp="2025-12-01 00:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:31:55.09304038 +0000 UTC m=+1535.873809464" watchObservedRunningTime="2025-12-01 00:31:55.102552527 +0000 UTC m=+1535.883321621" Dec 01 00:31:55 crc kubenswrapper[4846]: I1201 00:31:55.419559 4846 patch_prober.go:28] 
interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:31:55 crc kubenswrapper[4846]: I1201 00:31:55.419630 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:32:25 crc kubenswrapper[4846]: I1201 00:32:25.420236 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:32:25 crc kubenswrapper[4846]: I1201 00:32:25.420808 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:32:25 crc kubenswrapper[4846]: I1201 00:32:25.420853 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:32:25 crc kubenswrapper[4846]: I1201 00:32:25.421435 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:32:25 crc kubenswrapper[4846]: I1201 00:32:25.421498 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" gracePeriod=600 Dec 01 00:32:25 crc kubenswrapper[4846]: E1201 00:32:25.541547 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:32:26 crc kubenswrapper[4846]: I1201 00:32:26.258341 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" exitCode=0 Dec 01 00:32:26 crc kubenswrapper[4846]: I1201 00:32:26.258413 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6"} Dec 01 00:32:26 crc 
kubenswrapper[4846]: I1201 00:32:26.258464 4846 scope.go:117] "RemoveContainer" containerID="b0faa229a719b656486d8d8a0c45c6962db416832006ccdc888cb46d3002cabb" Dec 01 00:32:26 crc kubenswrapper[4846]: I1201 00:32:26.259024 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:32:26 crc kubenswrapper[4846]: E1201 00:32:26.259286 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:32:40 crc kubenswrapper[4846]: I1201 00:32:40.580543 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:32:40 crc kubenswrapper[4846]: E1201 00:32:40.581283 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:32:51 crc kubenswrapper[4846]: I1201 00:32:51.446578 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerID="45bd2c7b1a665a30d3353244aaa1afa0e891742b9e1aba86980fe4f5bc93af99" exitCode=0 Dec 01 00:32:51 crc kubenswrapper[4846]: I1201 00:32:51.446733 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerDied","Data":"45bd2c7b1a665a30d3353244aaa1afa0e891742b9e1aba86980fe4f5bc93af99"} Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.580888 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:32:52 crc kubenswrapper[4846]: E1201 00:32:52.581286 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.712843 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.800953 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801040 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801078 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801126 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801178 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801259 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801286 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801319 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801371 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bbd\" (UniqueName: \"kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801421 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801470 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.801533 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs\") pod \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\" (UID: \"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4\") " Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.802997 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.803104 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.803164 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.803470 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.803645 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.805697 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). 
InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.808430 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.809872 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.809915 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd" (OuterVolumeSpecName: "kube-api-access-67bbd") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "kube-api-access-67bbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.810810 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.895519 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903298 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903494 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903553 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903626 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bbd\" (UniqueName: \"kubernetes.io/projected/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-kube-api-access-67bbd\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903712 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903772 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903834 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.903968 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.904262 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.904346 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:52 crc kubenswrapper[4846]: I1201 00:32:52.904404 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:32:53 crc kubenswrapper[4846]: I1201 00:32:53.461990 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"b4dfb7c7-2c33-4c1e-bf23-80255e63afa4","Type":"ContainerDied","Data":"a07fe3a9168265de433d987edb1963c340debfb0e9ea4dffa4e6ac981875259e"} Dec 01 00:32:53 crc kubenswrapper[4846]: I1201 00:32:53.462037 4846 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="a07fe3a9168265de433d987edb1963c340debfb0e9ea4dffa4e6ac981875259e" Dec 01 00:32:53 crc kubenswrapper[4846]: I1201 00:32:53.462104 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Dec 01 00:32:53 crc kubenswrapper[4846]: I1201 00:32:53.642969 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" (UID: "b4dfb7c7-2c33-4c1e-bf23-80255e63afa4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:32:53 crc kubenswrapper[4846]: I1201 00:32:53.714880 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4dfb7c7-2c33-4c1e-bf23-80255e63afa4-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.747220 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 01 00:33:02 crc kubenswrapper[4846]: E1201 00:33:02.748532 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="git-clone" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.748552 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="git-clone" Dec 01 00:33:02 crc kubenswrapper[4846]: E1201 00:33:02.748565 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="docker-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.748572 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="docker-build" Dec 01 00:33:02 crc kubenswrapper[4846]: E1201 00:33:02.748586 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="manage-dockerfile" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.748594 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="manage-dockerfile" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.748803 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4dfb7c7-2c33-4c1e-bf23-80255e63afa4" containerName="docker-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.749766 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.766978 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.767631 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.768743 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.769321 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.770122 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840102 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840180 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840222 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840264 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmvr\" (UniqueName: \"kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840302 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840410 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840450 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840521 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840567 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840588 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840610 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.840702 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941573 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941663 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941738 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941770 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941804 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941830 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941851 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941877 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941904 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmvr\" (UniqueName: \"kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941932 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: 
\"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.941971 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.942004 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.942590 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.942676 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.942890 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.943335 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.943588 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.943938 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: 
\"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.943964 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.944024 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.944348 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.949457 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.949539 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:02 crc kubenswrapper[4846]: I1201 00:33:02.962276 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmvr\" (UniqueName: \"kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:03 crc kubenswrapper[4846]: I1201 00:33:03.091878 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:03 crc kubenswrapper[4846]: I1201 00:33:03.333639 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 01 00:33:03 crc kubenswrapper[4846]: I1201 00:33:03.525862 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"ea2fd04c-692b-48a9-85fc-021d56460a01","Type":"ContainerStarted","Data":"40c690e4bfb919c1f427fa483d5d890a53ad8ee409fa44f8bd1ca0a0b6ddedbe"} Dec 01 00:33:04 crc kubenswrapper[4846]: I1201 00:33:04.533839 4846 generic.go:334] "Generic (PLEG): container finished" podID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerID="c3e62b669e133a14fe197d4df9f2cc24681084f892ab054ae4daf9dd27a9f664" exitCode=0 Dec 01 00:33:04 crc kubenswrapper[4846]: I1201 00:33:04.533945 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"ea2fd04c-692b-48a9-85fc-021d56460a01","Type":"ContainerDied","Data":"c3e62b669e133a14fe197d4df9f2cc24681084f892ab054ae4daf9dd27a9f664"} Dec 01 00:33:05 crc kubenswrapper[4846]: I1201 00:33:05.548947 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_ea2fd04c-692b-48a9-85fc-021d56460a01/docker-build/0.log" Dec 01 00:33:05 crc kubenswrapper[4846]: I1201 00:33:05.550532 4846 generic.go:334] "Generic (PLEG): container finished" podID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerID="85df167dd4afe5ce9cf2946786953464d542d84d59fa0e247bd2719b81625254" exitCode=1 Dec 01 00:33:05 crc kubenswrapper[4846]: I1201 00:33:05.550578 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"ea2fd04c-692b-48a9-85fc-021d56460a01","Type":"ContainerDied","Data":"85df167dd4afe5ce9cf2946786953464d542d84d59fa0e247bd2719b81625254"} Dec 01 00:33:05 crc kubenswrapper[4846]: I1201 00:33:05.580904 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:33:05 crc kubenswrapper[4846]: E1201 00:33:05.581268 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.814864 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_ea2fd04c-692b-48a9-85fc-021d56460a01/docker-build/0.log" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.815948 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895653 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895714 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895791 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895811 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895863 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895879 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895899 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwmvr\" (UniqueName: \"kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895964 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.895993 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896024 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" 
(UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896028 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896134 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896171 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles\") pod \"ea2fd04c-692b-48a9-85fc-021d56460a01\" (UID: \"ea2fd04c-692b-48a9-85fc-021d56460a01\") " Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896523 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896658 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896732 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.896917 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.897073 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.897576 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.897621 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.898069 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.898427 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.911012 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.911025 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.911086 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr" (OuterVolumeSpecName: "kube-api-access-pwmvr") pod "ea2fd04c-692b-48a9-85fc-021d56460a01" (UID: "ea2fd04c-692b-48a9-85fc-021d56460a01"). InnerVolumeSpecName "kube-api-access-pwmvr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997279 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997315 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwmvr\" (UniqueName: \"kubernetes.io/projected/ea2fd04c-692b-48a9-85fc-021d56460a01-kube-api-access-pwmvr\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997326 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997335 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997344 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997356 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ea2fd04c-692b-48a9-85fc-021d56460a01-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997366 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997375 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ea2fd04c-692b-48a9-85fc-021d56460a01-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997384 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997392 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea2fd04c-692b-48a9-85fc-021d56460a01-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:06 crc kubenswrapper[4846]: I1201 00:33:06.997400 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ea2fd04c-692b-48a9-85fc-021d56460a01-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:07 crc kubenswrapper[4846]: I1201 00:33:07.574945 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_ea2fd04c-692b-48a9-85fc-021d56460a01/docker-build/0.log" Dec 01 00:33:07 crc kubenswrapper[4846]: I1201 00:33:07.575923 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"ea2fd04c-692b-48a9-85fc-021d56460a01","Type":"ContainerDied","Data":"40c690e4bfb919c1f427fa483d5d890a53ad8ee409fa44f8bd1ca0a0b6ddedbe"} Dec 01 00:33:07 crc kubenswrapper[4846]: I1201 00:33:07.575992 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c690e4bfb919c1f427fa483d5d890a53ad8ee409fa44f8bd1ca0a0b6ddedbe" Dec 01 00:33:07 crc kubenswrapper[4846]: I1201 00:33:07.576038 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Dec 01 00:33:13 crc kubenswrapper[4846]: I1201 00:33:13.216627 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 01 00:33:13 crc kubenswrapper[4846]: I1201 00:33:13.223318 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Dec 01 00:33:13 crc kubenswrapper[4846]: I1201 00:33:13.588872 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" path="/var/lib/kubelet/pods/ea2fd04c-692b-48a9-85fc-021d56460a01/volumes" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.803752 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 01 00:33:14 crc kubenswrapper[4846]: E1201 00:33:14.804990 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerName="manage-dockerfile" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.805027 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerName="manage-dockerfile" Dec 01 00:33:14 crc kubenswrapper[4846]: E1201 00:33:14.805045 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerName="docker-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.805052 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerName="docker-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.805161 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2fd04c-692b-48a9-85fc-021d56460a01" containerName="docker-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.805975 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.808506 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.808605 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.808659 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.812969 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.819176 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904398 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904466 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904498 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904519 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904544 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904570 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904738 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904764 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904784 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904807 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904830 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:14 crc kubenswrapper[4846]: I1201 00:33:14.904958 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmz8x\" (UniqueName: \"kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007190 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007273 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007315 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007351 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007380 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmz8x\" (UniqueName: \"kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007453 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007476 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007481 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007511 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.007991 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008049 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008093 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008154 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008219 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008526 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008910 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.008971 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.009161 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.009570 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.009667 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.009763 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.017226 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.020338 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.029153 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmz8x\" (UniqueName: \"kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.127206 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.564362 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Dec 01 00:33:15 crc kubenswrapper[4846]: W1201 00:33:15.576384 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e4b058f_c51f_4de4_97df_b3a40b252619.slice/crio-c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745 WatchSource:0}: Error finding container c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745: Status 404 returned error can't find the container with id c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745 Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.637374 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerStarted","Data":"c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745"} Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.973086 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.974363 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:15 crc kubenswrapper[4846]: I1201 00:33:15.988812 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.024480 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.024565 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.024593 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfjz\" (UniqueName: \"kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.126389 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.127359 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.127636 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfjz\" (UniqueName: \"kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.127228 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.128110 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.146283 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfjz\" (UniqueName: \"kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz\") pod \"community-operators-5kdfw\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.297935 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.562950 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.645835 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerStarted","Data":"9bede672a36c82f0342a599a18ed46be1e023621eb87e98419c8758f9ddc8df9"} Dec 01 00:33:16 crc kubenswrapper[4846]: I1201 00:33:16.650712 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerStarted","Data":"a554181964158e359a33d936f7b0c9bb2d4d5ae19dea16a8e93274a81a95f57f"} Dec 01 00:33:17 crc kubenswrapper[4846]: I1201 00:33:17.657866 4846 generic.go:334] "Generic (PLEG): container finished" podID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerID="9bede672a36c82f0342a599a18ed46be1e023621eb87e98419c8758f9ddc8df9" exitCode=0 Dec 01 00:33:17 crc kubenswrapper[4846]: I1201 00:33:17.657952 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerDied","Data":"9bede672a36c82f0342a599a18ed46be1e023621eb87e98419c8758f9ddc8df9"} Dec 01 00:33:17 crc kubenswrapper[4846]: I1201 00:33:17.659263 4846 generic.go:334] "Generic (PLEG): container finished" podID="32ea411c-3594-4377-bf6a-64589e4a533c" containerID="780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91" exitCode=0 Dec 01 00:33:17 crc kubenswrapper[4846]: I1201 00:33:17.659288 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerDied","Data":"780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91"} Dec 01 00:33:18 crc kubenswrapper[4846]: I1201 00:33:18.672761 4846 generic.go:334] "Generic (PLEG): container finished" podID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerID="e88267581d4e032aa0cd1bfd04239d93dc38778f16babb808e7e1363f447c667" exitCode=0 Dec 01 00:33:18 crc kubenswrapper[4846]: I1201 00:33:18.672870 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerDied","Data":"e88267581d4e032aa0cd1bfd04239d93dc38778f16babb808e7e1363f447c667"} Dec 01 00:33:18 crc kubenswrapper[4846]: I1201 00:33:18.714234 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_0e4b058f-c51f-4de4-97df-b3a40b252619/manage-dockerfile/0.log" Dec 01 00:33:19 crc kubenswrapper[4846]: I1201 00:33:19.584774 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:33:19 crc kubenswrapper[4846]: E1201 00:33:19.585059 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" 
podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:33:19 crc kubenswrapper[4846]: I1201 00:33:19.685621 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerStarted","Data":"c34e20b8b5392799142608b090b6aa191fd9344c9d44f9da8de3be8851b82ae6"} Dec 01 00:33:19 crc kubenswrapper[4846]: I1201 00:33:19.690584 4846 generic.go:334] "Generic (PLEG): container finished" podID="32ea411c-3594-4377-bf6a-64589e4a533c" containerID="758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53" exitCode=0 Dec 01 00:33:19 crc kubenswrapper[4846]: I1201 00:33:19.690631 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerDied","Data":"758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53"} Dec 01 00:33:19 crc kubenswrapper[4846]: I1201 00:33:19.719380 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.719365257 podStartE2EDuration="5.719365257s" podCreationTimestamp="2025-12-01 00:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:33:19.714892029 +0000 UTC m=+1620.495661113" watchObservedRunningTime="2025-12-01 00:33:19.719365257 +0000 UTC m=+1620.500134331" Dec 01 00:33:21 crc kubenswrapper[4846]: I1201 00:33:21.707841 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerStarted","Data":"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda"} Dec 01 00:33:21 crc kubenswrapper[4846]: I1201 00:33:21.731939 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kdfw" podStartSLOduration=3.833535191 podStartE2EDuration="6.731916396s" podCreationTimestamp="2025-12-01 00:33:15 +0000 UTC" firstStartedPulling="2025-12-01 00:33:17.660438967 +0000 UTC m=+1618.441208041" lastFinishedPulling="2025-12-01 00:33:20.558820172 +0000 UTC m=+1621.339589246" observedRunningTime="2025-12-01 00:33:21.730253735 +0000 UTC m=+1622.511022829" watchObservedRunningTime="2025-12-01 00:33:21.731916396 +0000 UTC m=+1622.512685470" Dec 01 00:33:22 crc kubenswrapper[4846]: I1201 00:33:22.717202 4846 generic.go:334] "Generic (PLEG): container finished" podID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerID="c34e20b8b5392799142608b090b6aa191fd9344c9d44f9da8de3be8851b82ae6" exitCode=0 Dec 01 00:33:22 crc kubenswrapper[4846]: I1201 00:33:22.717256 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerDied","Data":"c34e20b8b5392799142608b090b6aa191fd9344c9d44f9da8de3be8851b82ae6"} Dec 01 00:33:23 crc kubenswrapper[4846]: I1201 00:33:23.958944 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.039867 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmz8x\" (UniqueName: \"kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.039928 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.039945 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.039973 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.039990 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040010 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040034 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040068 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040083 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040105 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040125 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040142 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir\") pod \"0e4b058f-c51f-4de4-97df-b3a40b252619\" (UID: \"0e4b058f-c51f-4de4-97df-b3a40b252619\") " Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040341 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040893 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.040918 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.041536 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.041828 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.041966 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.041994 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.043171 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.045960 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.046747 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x" (OuterVolumeSpecName: "kube-api-access-jmz8x") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "kube-api-access-jmz8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.047664 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.048021 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0e4b058f-c51f-4de4-97df-b3a40b252619" (UID: "0e4b058f-c51f-4de4-97df-b3a40b252619"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141594 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141640 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141653 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmz8x\" (UniqueName: \"kubernetes.io/projected/0e4b058f-c51f-4de4-97df-b3a40b252619-kube-api-access-jmz8x\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141666 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141764 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141782 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141794 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0e4b058f-c51f-4de4-97df-b3a40b252619-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141805 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141819 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e4b058f-c51f-4de4-97df-b3a40b252619-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141848 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/0e4b058f-c51f-4de4-97df-b3a40b252619-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141865 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.141877 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0e4b058f-c51f-4de4-97df-b3a40b252619-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.733043 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"0e4b058f-c51f-4de4-97df-b3a40b252619","Type":"ContainerDied","Data":"c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745"} Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.733089 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fb75e12108011c074f71c74edf635efc64e450488701d73f905b0aef3e3745" Dec 01 00:33:24 crc kubenswrapper[4846]: I1201 00:33:24.733201 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Dec 01 00:33:26 crc kubenswrapper[4846]: I1201 00:33:26.298483 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:26 crc kubenswrapper[4846]: I1201 00:33:26.298611 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:26 crc kubenswrapper[4846]: I1201 00:33:26.337824 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:26 crc kubenswrapper[4846]: I1201 00:33:26.791970 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:26 crc kubenswrapper[4846]: I1201 00:33:26.845197 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.291189 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 01 00:33:28 crc kubenswrapper[4846]: E1201 00:33:28.291785 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="git-clone" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.291802 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="git-clone" Dec 01 00:33:28 crc kubenswrapper[4846]: E1201 00:33:28.291818 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="docker-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.291825 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="docker-build" Dec 01 00:33:28 crc kubenswrapper[4846]: E1201 00:33:28.291835 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="manage-dockerfile" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.291842 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="manage-dockerfile" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.291946 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4b058f-c51f-4de4-97df-b3a40b252619" containerName="docker-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.292555 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.294418 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.298212 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.298416 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.298724 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.304797 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404116 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404191 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404224 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404252 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404313 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcsz\" (UniqueName: \"kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404330 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles\") 
pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404346 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404474 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404640 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404676 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404741 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.404782 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506281 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506361 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506385 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506402 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506422 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtcsz\" (UniqueName: \"kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506441 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506458 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506480 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.506839 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507003 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 
crc kubenswrapper[4846]: I1201 00:33:28.507129 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507217 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507259 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507287 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507448 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507473 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507666 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507721 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507743 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.507786 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.508017 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.512521 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.512808 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.526821 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtcsz\" (UniqueName: \"kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.608294 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:28 crc kubenswrapper[4846]: I1201 00:33:28.759844 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kdfw" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="registry-server" containerID="cri-o://3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda" gracePeriod=2 Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.003085 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.071811 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.114334 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qfjz\" (UniqueName: \"kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz\") pod \"32ea411c-3594-4377-bf6a-64589e4a533c\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.114463 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content\") pod \"32ea411c-3594-4377-bf6a-64589e4a533c\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.114557 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities\") pod \"32ea411c-3594-4377-bf6a-64589e4a533c\" (UID: \"32ea411c-3594-4377-bf6a-64589e4a533c\") " Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.116317 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities" (OuterVolumeSpecName: "utilities") pod "32ea411c-3594-4377-bf6a-64589e4a533c" (UID: "32ea411c-3594-4377-bf6a-64589e4a533c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.119798 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz" (OuterVolumeSpecName: "kube-api-access-6qfjz") pod "32ea411c-3594-4377-bf6a-64589e4a533c" (UID: "32ea411c-3594-4377-bf6a-64589e4a533c"). InnerVolumeSpecName "kube-api-access-6qfjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.167904 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ea411c-3594-4377-bf6a-64589e4a533c" (UID: "32ea411c-3594-4377-bf6a-64589e4a533c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.216522 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qfjz\" (UniqueName: \"kubernetes.io/projected/32ea411c-3594-4377-bf6a-64589e4a533c-kube-api-access-6qfjz\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.216550 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.216561 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ea411c-3594-4377-bf6a-64589e4a533c-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.765534 4846 generic.go:334] "Generic (PLEG): container finished" podID="ca653589-d516-4c0b-a838-0805783e7659" containerID="7f8a082eaec5f564874dbc2e5dd2d57e66c6b026a60e65edd0d533bfeffc4228" exitCode=0 Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.765588 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"ca653589-d516-4c0b-a838-0805783e7659","Type":"ContainerDied","Data":"7f8a082eaec5f564874dbc2e5dd2d57e66c6b026a60e65edd0d533bfeffc4228"} Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.765837 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"ca653589-d516-4c0b-a838-0805783e7659","Type":"ContainerStarted","Data":"8dfcbfe80a12ba3f19cae3621f5a9cffcd2783b5a1da41e3e41ae2d85b66a037"} Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.767899 4846 generic.go:334] "Generic (PLEG): container finished" podID="32ea411c-3594-4377-bf6a-64589e4a533c" containerID="3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda" exitCode=0 Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.767941 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerDied","Data":"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda"} Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.767959 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kdfw" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.767972 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kdfw" event={"ID":"32ea411c-3594-4377-bf6a-64589e4a533c","Type":"ContainerDied","Data":"a554181964158e359a33d936f7b0c9bb2d4d5ae19dea16a8e93274a81a95f57f"} Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.767994 4846 scope.go:117] "RemoveContainer" containerID="3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.796782 4846 scope.go:117] "RemoveContainer" containerID="758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.803951 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.810468 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kdfw"] Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.844975 4846 scope.go:117] "RemoveContainer" containerID="780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.861889 4846 scope.go:117] "RemoveContainer" containerID="3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda" Dec 01 00:33:29 crc kubenswrapper[4846]: E1201 00:33:29.862521 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda\": container with ID starting with 3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda not found: ID does not exist" containerID="3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.862558 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda"} err="failed to get container status \"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda\": rpc error: code = NotFound desc = could not find container \"3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda\": container with ID starting with 3e2a0f9fc3b3a439b06bc4adfd6e3e214312fe8834e8b3e5b7f30697b965afda not found: ID does not exist" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.862592 4846 scope.go:117] "RemoveContainer" containerID="758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53" Dec 01 00:33:29 crc kubenswrapper[4846]: E1201 00:33:29.863159 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53\": container with ID starting with 758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53 not found: ID does not exist" containerID="758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.863218 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53"} err="failed to get container status \"758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53\": rpc error: code = NotFound desc = could not find 
container \"758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53\": container with ID starting with 758a0b73b0421425c95e3ad21fd1e74acfbaa93d1d4865fb9970603b9b7fad53 not found: ID does not exist" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.863257 4846 scope.go:117] "RemoveContainer" containerID="780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91" Dec 01 00:33:29 crc kubenswrapper[4846]: E1201 00:33:29.864753 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91\": container with ID starting with 780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91 not found: ID does not exist" containerID="780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91" Dec 01 00:33:29 crc kubenswrapper[4846]: I1201 00:33:29.864835 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91"} err="failed to get container status \"780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91\": rpc error: code = NotFound desc = could not find container \"780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91\": container with ID starting with 780a5536a43c3ac1d015fd7cd41951134f014a39cacb22a11eb3317bfdd98a91 not found: ID does not exist" Dec 01 00:33:30 crc kubenswrapper[4846]: E1201 00:33:30.537942 4846 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca653589_d516_4c0b_a838_0805783e7659.slice/crio-conmon-61a07c2c951cafde52a2a52fbf76181586ea6f93a60b3e03da1f13cabbe4c303.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca653589_d516_4c0b_a838_0805783e7659.slice/crio-61a07c2c951cafde52a2a52fbf76181586ea6f93a60b3e03da1f13cabbe4c303.scope\": RecentStats: unable to find data in memory cache]" Dec 01 00:33:30 crc kubenswrapper[4846]: I1201 00:33:30.778983 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_ca653589-d516-4c0b-a838-0805783e7659/docker-build/0.log" Dec 01 00:33:30 crc kubenswrapper[4846]: I1201 00:33:30.779493 4846 generic.go:334] "Generic (PLEG): container finished" podID="ca653589-d516-4c0b-a838-0805783e7659" containerID="61a07c2c951cafde52a2a52fbf76181586ea6f93a60b3e03da1f13cabbe4c303" exitCode=1 Dec 01 00:33:30 crc kubenswrapper[4846]: I1201 00:33:30.779541 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"ca653589-d516-4c0b-a838-0805783e7659","Type":"ContainerDied","Data":"61a07c2c951cafde52a2a52fbf76181586ea6f93a60b3e03da1f13cabbe4c303"} Dec 01 00:33:31 crc kubenswrapper[4846]: I1201 00:33:31.581055 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:33:31 crc kubenswrapper[4846]: E1201 00:33:31.581458 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:33:31 crc kubenswrapper[4846]: I1201 00:33:31.588528 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" path="/var/lib/kubelet/pods/32ea411c-3594-4377-bf6a-64589e4a533c/volumes" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.066560 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_ca653589-d516-4c0b-a838-0805783e7659/docker-build/0.log" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.067150 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157569 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157625 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157693 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157712 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157737 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtcsz\" (UniqueName: \"kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157792 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157820 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157846 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157878 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157897 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157928 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.157949 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs\") pod \"ca653589-d516-4c0b-a838-0805783e7659\" (UID: \"ca653589-d516-4c0b-a838-0805783e7659\") " Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.158407 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.158478 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.158530 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.158925 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.159028 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.159404 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.160303 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.160401 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.161125 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.166661 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.166807 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz" (OuterVolumeSpecName: "kube-api-access-mtcsz") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "kube-api-access-mtcsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.167145 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "ca653589-d516-4c0b-a838-0805783e7659" (UID: "ca653589-d516-4c0b-a838-0805783e7659"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260229 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260268 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260277 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260285 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260294 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260302 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca653589-d516-4c0b-a838-0805783e7659-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260312 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ca653589-d516-4c0b-a838-0805783e7659-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260320 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260328 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260337 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ca653589-d516-4c0b-a838-0805783e7659-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260345 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: 
\"kubernetes.io/secret/ca653589-d516-4c0b-a838-0805783e7659-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.260355 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtcsz\" (UniqueName: \"kubernetes.io/projected/ca653589-d516-4c0b-a838-0805783e7659-kube-api-access-mtcsz\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.796035 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_ca653589-d516-4c0b-a838-0805783e7659/docker-build/0.log" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.796594 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"ca653589-d516-4c0b-a838-0805783e7659","Type":"ContainerDied","Data":"8dfcbfe80a12ba3f19cae3621f5a9cffcd2783b5a1da41e3e41ae2d85b66a037"} Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.796634 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dfcbfe80a12ba3f19cae3621f5a9cffcd2783b5a1da41e3e41ae2d85b66a037" Dec 01 00:33:32 crc kubenswrapper[4846]: I1201 00:33:32.796647 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Dec 01 00:33:38 crc kubenswrapper[4846]: I1201 00:33:38.769852 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 01 00:33:38 crc kubenswrapper[4846]: I1201 00:33:38.775138 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Dec 01 00:33:39 crc kubenswrapper[4846]: I1201 00:33:39.589069 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca653589-d516-4c0b-a838-0805783e7659" path="/var/lib/kubelet/pods/ca653589-d516-4c0b-a838-0805783e7659/volumes" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407056 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 01 00:33:40 crc kubenswrapper[4846]: E1201 00:33:40.407595 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="extract-utilities" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407608 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="extract-utilities" Dec 01 00:33:40 crc kubenswrapper[4846]: E1201 00:33:40.407622 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="extract-content" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407630 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="extract-content" Dec 01 00:33:40 crc kubenswrapper[4846]: E1201 00:33:40.407655 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="registry-server" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407663 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="registry-server" Dec 01 00:33:40 crc kubenswrapper[4846]: E1201 00:33:40.407694 4846 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca653589-d516-4c0b-a838-0805783e7659" containerName="manage-dockerfile" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407702 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca653589-d516-4c0b-a838-0805783e7659" containerName="manage-dockerfile" Dec 01 00:33:40 crc kubenswrapper[4846]: E1201 00:33:40.407713 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca653589-d516-4c0b-a838-0805783e7659" containerName="docker-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407720 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca653589-d516-4c0b-a838-0805783e7659" containerName="docker-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407838 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ea411c-3594-4377-bf6a-64589e4a533c" containerName="registry-server" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.407860 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca653589-d516-4c0b-a838-0805783e7659" containerName="docker-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.408817 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.410873 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.410891 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.411141 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.422629 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.429167 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468434 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468518 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ldz\" (UniqueName: \"kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468538 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 
00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468630 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468703 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468755 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468783 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468819 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468846 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468864 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468883 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 
00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.468901 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570058 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570137 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570596 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570155 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570664 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570701 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570731 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570768 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t7ldz\" (UniqueName: \"kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570794 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570818 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570838 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570855 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.570879 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.571110 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.571156 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.571172 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.571617 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.571826 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.572191 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.572976 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.575572 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.579905 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.583617 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.593023 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7ldz\" (UniqueName: \"kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.726966 4846 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:40 crc kubenswrapper[4846]: I1201 00:33:40.943305 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Dec 01 00:33:41 crc kubenswrapper[4846]: I1201 00:33:41.864441 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerStarted","Data":"ed1a27ee04c31b18fd566639517f61ee89962f3663e1a54f69429de054c38923"} Dec 01 00:33:41 crc kubenswrapper[4846]: I1201 00:33:41.864745 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerStarted","Data":"07fd665c389ff0a0d340899ae3eb2c3f00b385ae22e2b546d316857222d5d2b7"} Dec 01 00:33:42 crc kubenswrapper[4846]: I1201 00:33:42.872958 4846 generic.go:334] "Generic (PLEG): container finished" podID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerID="ed1a27ee04c31b18fd566639517f61ee89962f3663e1a54f69429de054c38923" exitCode=0 Dec 01 00:33:42 crc kubenswrapper[4846]: I1201 00:33:42.873054 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerDied","Data":"ed1a27ee04c31b18fd566639517f61ee89962f3663e1a54f69429de054c38923"} Dec 01 00:33:43 crc kubenswrapper[4846]: I1201 00:33:43.882737 4846 generic.go:334] "Generic (PLEG): container finished" podID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerID="9f8ac3fe493180a22be939d6dfe5b1e27077154719a8d9725279c70a6463d5f3" exitCode=0 Dec 01 00:33:43 crc kubenswrapper[4846]: I1201 00:33:43.882791 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerDied","Data":"9f8ac3fe493180a22be939d6dfe5b1e27077154719a8d9725279c70a6463d5f3"} Dec 01 00:33:43 crc kubenswrapper[4846]: I1201 00:33:43.913032 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_155e4576-7e93-4e9b-acc1-ededbae0df1a/manage-dockerfile/0.log" Dec 01 00:33:44 crc kubenswrapper[4846]: I1201 00:33:44.893358 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerStarted","Data":"f5f492005a45f89427dfd9ad2ab7334946dc827fc496e8032b310f631378e9fe"} Dec 01 00:33:44 crc kubenswrapper[4846]: I1201 00:33:44.929018 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=4.928996876 podStartE2EDuration="4.928996876s" podCreationTimestamp="2025-12-01 00:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:33:44.926510369 +0000 UTC m=+1645.707279433" watchObservedRunningTime="2025-12-01 00:33:44.928996876 +0000 UTC m=+1645.709765950" Dec 01 00:33:45 crc kubenswrapper[4846]: I1201 00:33:45.581301 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:33:45 crc kubenswrapper[4846]: E1201 00:33:45.581612 4846 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:33:47 crc kubenswrapper[4846]: I1201 00:33:47.918181 4846 generic.go:334] "Generic (PLEG): container finished" podID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerID="f5f492005a45f89427dfd9ad2ab7334946dc827fc496e8032b310f631378e9fe" exitCode=0 Dec 01 00:33:47 crc kubenswrapper[4846]: I1201 00:33:47.918245 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerDied","Data":"f5f492005a45f89427dfd9ad2ab7334946dc827fc496e8032b310f631378e9fe"} Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.203185 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308514 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308582 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308650 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308708 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308646 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308794 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308817 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308856 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308896 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308928 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308952 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.308973 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.309007 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7ldz\" (UniqueName: \"kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz\") pod \"155e4576-7e93-4e9b-acc1-ededbae0df1a\" (UID: \"155e4576-7e93-4e9b-acc1-ededbae0df1a\") " Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.309092 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.309365 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.309382 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/155e4576-7e93-4e9b-acc1-ededbae0df1a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.311282 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.311544 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.311989 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.312097 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.312971 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.314342 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.316708 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.316747 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz" (OuterVolumeSpecName: "kube-api-access-t7ldz") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "kube-api-access-t7ldz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.316863 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.317126 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "155e4576-7e93-4e9b-acc1-ededbae0df1a" (UID: "155e4576-7e93-4e9b-acc1-ededbae0df1a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410467 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410534 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410547 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410560 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410571 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410588 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410599 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7ldz\" (UniqueName: \"kubernetes.io/projected/155e4576-7e93-4e9b-acc1-ededbae0df1a-kube-api-access-t7ldz\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410611 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/155e4576-7e93-4e9b-acc1-ededbae0df1a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410621 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/155e4576-7e93-4e9b-acc1-ededbae0df1a-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.410634 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/155e4576-7e93-4e9b-acc1-ededbae0df1a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.933781 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"155e4576-7e93-4e9b-acc1-ededbae0df1a","Type":"ContainerDied","Data":"07fd665c389ff0a0d340899ae3eb2c3f00b385ae22e2b546d316857222d5d2b7"} Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.934107 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fd665c389ff0a0d340899ae3eb2c3f00b385ae22e2b546d316857222d5d2b7" Dec 01 00:33:49 crc kubenswrapper[4846]: I1201 00:33:49.933905 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Dec 01 00:33:56 crc kubenswrapper[4846]: I1201 00:33:56.580208 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:33:56 crc kubenswrapper[4846]: E1201 00:33:56.581716 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.767476 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 01 00:34:05 crc kubenswrapper[4846]: E1201 00:34:05.768189 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="manage-dockerfile" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.768201 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="manage-dockerfile" Dec 01 00:34:05 crc kubenswrapper[4846]: E1201 00:34:05.768211 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="docker-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.768216 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="docker-build" Dec 01 00:34:05 crc kubenswrapper[4846]: E1201 00:34:05.768225 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="git-clone" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.768230 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="git-clone" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.768328 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="155e4576-7e93-4e9b-acc1-ededbae0df1a" containerName="docker-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.769128 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.770999 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.771127 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.771305 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ql88j" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.771860 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.773587 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.798774 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857213 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857264 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857294 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857359 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsnb\" (UniqueName: \"kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857389 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 
00:34:05.857413 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857431 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857532 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857600 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857663 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857700 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857749 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.857764 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" 
(UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958770 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958817 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958843 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958859 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958886 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958918 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsnb\" (UniqueName: \"kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958937 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958955 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: 
\"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958969 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.958994 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959016 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959035 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959056 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959578 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959676 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.959756 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc 
kubenswrapper[4846]: I1201 00:34:05.959813 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.960036 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.960145 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.960252 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.960455 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.961828 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.965950 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.968045 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.969256 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:05 crc kubenswrapper[4846]: I1201 00:34:05.978980 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsnb\" (UniqueName: \"kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb\") pod \"service-telemetry-framework-index-1-build\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:06 crc kubenswrapper[4846]: I1201 00:34:06.090194 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:06 crc kubenswrapper[4846]: I1201 00:34:06.500994 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 01 00:34:07 crc kubenswrapper[4846]: I1201 00:34:07.067491 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerStarted","Data":"1ff8d4d7b25e3370792c37fd944936cfc5633f358b7dade718107078a9f6fa16"} Dec 01 00:34:07 crc kubenswrapper[4846]: I1201 00:34:07.068062 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerStarted","Data":"11f6450458f6a8939bac9d7dfb4add863acc6dcb27ed800f874113ecf6a93512"} Dec 01 00:34:08 crc kubenswrapper[4846]: I1201 00:34:08.074112 4846 generic.go:334] "Generic (PLEG): container finished" podID="e7baad33-559f-44ee-bae2-57ff8591519f" containerID="1ff8d4d7b25e3370792c37fd944936cfc5633f358b7dade718107078a9f6fa16" exitCode=0 Dec 01 00:34:08 crc kubenswrapper[4846]: I1201 00:34:08.074155 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerDied","Data":"1ff8d4d7b25e3370792c37fd944936cfc5633f358b7dade718107078a9f6fa16"} Dec 01 00:34:09 crc kubenswrapper[4846]: I1201 00:34:09.082727 4846 generic.go:334] "Generic (PLEG): container finished" podID="e7baad33-559f-44ee-bae2-57ff8591519f" containerID="5a4feb3896be64c6e5eaeadc60197bb4cf0b09915f7e54fd337a10945d699df4" exitCode=0 Dec 01 00:34:09 crc kubenswrapper[4846]: I1201 00:34:09.082778 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerDied","Data":"5a4feb3896be64c6e5eaeadc60197bb4cf0b09915f7e54fd337a10945d699df4"} Dec 01 00:34:09 crc kubenswrapper[4846]: I1201 00:34:09.155403 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_e7baad33-559f-44ee-bae2-57ff8591519f/manage-dockerfile/0.log" Dec 01 00:34:09 crc kubenswrapper[4846]: I1201 00:34:09.588530 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:34:09 crc kubenswrapper[4846]: E1201 00:34:09.589229 4846 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:10 crc kubenswrapper[4846]: I1201 00:34:10.094183 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerStarted","Data":"a2abe88fa6f987cbb2337fd5543caa3bb636769c187418cee50fbba5db2798fa"} Dec 01 00:34:10 crc kubenswrapper[4846]: I1201 00:34:10.138263 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.138240083 podStartE2EDuration="5.138240083s" podCreationTimestamp="2025-12-01 00:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:34:10.133448553 +0000 UTC m=+1670.914217657" watchObservedRunningTime="2025-12-01 00:34:10.138240083 +0000 UTC m=+1670.919009167" Dec 01 00:34:22 crc kubenswrapper[4846]: I1201 00:34:22.581208 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:34:22 crc kubenswrapper[4846]: E1201 00:34:22.582272 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:33 crc kubenswrapper[4846]: I1201 00:34:33.580061 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:34:33 crc kubenswrapper[4846]: E1201 00:34:33.580825 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:40 crc kubenswrapper[4846]: I1201 00:34:40.292519 4846 generic.go:334] "Generic (PLEG): container finished" podID="e7baad33-559f-44ee-bae2-57ff8591519f" containerID="a2abe88fa6f987cbb2337fd5543caa3bb636769c187418cee50fbba5db2798fa" exitCode=0 Dec 01 00:34:40 crc kubenswrapper[4846]: I1201 00:34:40.292602 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerDied","Data":"a2abe88fa6f987cbb2337fd5543caa3bb636769c187418cee50fbba5db2798fa"} Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.533428 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623143 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623187 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phsnb\" (UniqueName: \"kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623238 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623263 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623286 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623322 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623366 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623385 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623420 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623451 4846 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623468 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623488 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623524 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles\") pod \"e7baad33-559f-44ee-bae2-57ff8591519f\" (UID: \"e7baad33-559f-44ee-bae2-57ff8591519f\") " Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.623859 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.624070 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.624507 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.624942 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.625350 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.625486 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.629888 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.640706 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.640809 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull" (OuterVolumeSpecName: "builder-dockercfg-ql88j-pull") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "builder-dockercfg-ql88j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.640826 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push" (OuterVolumeSpecName: "builder-dockercfg-ql88j-push") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "builder-dockercfg-ql88j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.640732 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb" (OuterVolumeSpecName: "kube-api-access-phsnb") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "kube-api-access-phsnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725417 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725451 4846 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725460 4846 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7baad33-559f-44ee-bae2-57ff8591519f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725469 4846 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725482 4846 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725494 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-pull\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-pull\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725505 4846 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725515 4846 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725524 4846 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7baad33-559f-44ee-bae2-57ff8591519f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725534 4846 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ql88j-push\" (UniqueName: \"kubernetes.io/secret/e7baad33-559f-44ee-bae2-57ff8591519f-builder-dockercfg-ql88j-push\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.725543 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phsnb\" (UniqueName: \"kubernetes.io/projected/e7baad33-559f-44ee-bae2-57ff8591519f-kube-api-access-phsnb\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.856402 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). 
InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:34:41 crc kubenswrapper[4846]: I1201 00:34:41.982523 4846 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:42 crc kubenswrapper[4846]: I1201 00:34:42.315042 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"e7baad33-559f-44ee-bae2-57ff8591519f","Type":"ContainerDied","Data":"11f6450458f6a8939bac9d7dfb4add863acc6dcb27ed800f874113ecf6a93512"} Dec 01 00:34:42 crc kubenswrapper[4846]: I1201 00:34:42.315114 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f6450458f6a8939bac9d7dfb4add863acc6dcb27ed800f874113ecf6a93512" Dec 01 00:34:42 crc kubenswrapper[4846]: I1201 00:34:42.315158 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 01 00:34:42 crc kubenswrapper[4846]: I1201 00:34:42.958253 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e7baad33-559f-44ee-bae2-57ff8591519f" (UID: "e7baad33-559f-44ee-bae2-57ff8591519f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:34:42 crc kubenswrapper[4846]: I1201 00:34:42.996535 4846 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7baad33-559f-44ee-bae2-57ff8591519f-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.633971 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:43 crc kubenswrapper[4846]: E1201 00:34:43.634665 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="docker-build" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.634713 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="docker-build" Dec 01 00:34:43 crc kubenswrapper[4846]: E1201 00:34:43.634745 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="git-clone" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.634759 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="git-clone" Dec 01 00:34:43 crc kubenswrapper[4846]: E1201 00:34:43.634783 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="manage-dockerfile" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.634799 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="manage-dockerfile" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.635027 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7baad33-559f-44ee-bae2-57ff8591519f" containerName="docker-build" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.635793 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.641496 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.642987 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-8nt9l" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.705743 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vslm\" (UniqueName: \"kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm\") pod \"infrawatch-operators-jj4xd\" (UID: \"bbcab12c-7b3a-4d8d-a51f-f386764d5cac\") " pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.807036 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vslm\" (UniqueName: \"kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm\") pod \"infrawatch-operators-jj4xd\" (UID: \"bbcab12c-7b3a-4d8d-a51f-f386764d5cac\") " pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.827359 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vslm\" (UniqueName: \"kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm\") pod \"infrawatch-operators-jj4xd\" (UID: \"bbcab12c-7b3a-4d8d-a51f-f386764d5cac\") " pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:43 crc kubenswrapper[4846]: I1201 00:34:43.969455 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:44 crc kubenswrapper[4846]: I1201 00:34:44.185042 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:44 crc kubenswrapper[4846]: I1201 00:34:44.331970 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jj4xd" event={"ID":"bbcab12c-7b3a-4d8d-a51f-f386764d5cac","Type":"ContainerStarted","Data":"a187749a3fe229d8fe00a18bc62e02a5378e0d476cd389c6a193542eabc7a28f"} Dec 01 00:34:45 crc kubenswrapper[4846]: I1201 00:34:45.580546 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:34:45 crc kubenswrapper[4846]: E1201 00:34:45.580843 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:46 crc kubenswrapper[4846]: I1201 00:34:46.405095 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.218175 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-nzhxh"] Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.219169 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.223376 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nzhxh"] Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.258883 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjp7r\" (UniqueName: \"kubernetes.io/projected/7198bf9a-8329-42f4-b37f-5012e511721a-kube-api-access-qjp7r\") pod \"infrawatch-operators-nzhxh\" (UID: \"7198bf9a-8329-42f4-b37f-5012e511721a\") " pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.359945 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjp7r\" (UniqueName: \"kubernetes.io/projected/7198bf9a-8329-42f4-b37f-5012e511721a-kube-api-access-qjp7r\") pod \"infrawatch-operators-nzhxh\" (UID: \"7198bf9a-8329-42f4-b37f-5012e511721a\") " pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.382031 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjp7r\" (UniqueName: \"kubernetes.io/projected/7198bf9a-8329-42f4-b37f-5012e511721a-kube-api-access-qjp7r\") pod \"infrawatch-operators-nzhxh\" (UID: \"7198bf9a-8329-42f4-b37f-5012e511721a\") " pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:47 crc kubenswrapper[4846]: I1201 00:34:47.580465 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:55 crc kubenswrapper[4846]: I1201 00:34:55.984135 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nzhxh"] Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.416316 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nzhxh" event={"ID":"7198bf9a-8329-42f4-b37f-5012e511721a","Type":"ContainerStarted","Data":"a276bdbc3cecde2ad28a1c87501012a13b0d1d9a2feb54c7b4a3138a70c36736"} Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.416370 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nzhxh" event={"ID":"7198bf9a-8329-42f4-b37f-5012e511721a","Type":"ContainerStarted","Data":"7af06c14ad8144b5f11ed7971043c58b355b8f4814fd72f204a395a7f6540c3f"} Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.417824 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jj4xd" event={"ID":"bbcab12c-7b3a-4d8d-a51f-f386764d5cac","Type":"ContainerStarted","Data":"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e"} Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.417944 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-jj4xd" podUID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" containerName="registry-server" containerID="cri-o://01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e" gracePeriod=2 Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.434233 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-nzhxh" podStartSLOduration=9.345331332 podStartE2EDuration="9.434215907s" podCreationTimestamp="2025-12-01 00:34:47 +0000 UTC" firstStartedPulling="2025-12-01 00:34:56.013634803 +0000 UTC 
m=+1716.794403877" lastFinishedPulling="2025-12-01 00:34:56.102519378 +0000 UTC m=+1716.883288452" observedRunningTime="2025-12-01 00:34:56.429441489 +0000 UTC m=+1717.210210603" watchObservedRunningTime="2025-12-01 00:34:56.434215907 +0000 UTC m=+1717.214984971" Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.444499 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jj4xd" podStartSLOduration=1.531225286 podStartE2EDuration="13.444480116s" podCreationTimestamp="2025-12-01 00:34:43 +0000 UTC" firstStartedPulling="2025-12-01 00:34:44.195173952 +0000 UTC m=+1704.975943026" lastFinishedPulling="2025-12-01 00:34:56.108428782 +0000 UTC m=+1716.889197856" observedRunningTime="2025-12-01 00:34:56.442870957 +0000 UTC m=+1717.223640051" watchObservedRunningTime="2025-12-01 00:34:56.444480116 +0000 UTC m=+1717.225249190" Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.580082 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:34:56 crc kubenswrapper[4846]: E1201 00:34:56.580330 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.754615 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.782924 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vslm\" (UniqueName: \"kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm\") pod \"bbcab12c-7b3a-4d8d-a51f-f386764d5cac\" (UID: \"bbcab12c-7b3a-4d8d-a51f-f386764d5cac\") " Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.789167 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm" (OuterVolumeSpecName: "kube-api-access-6vslm") pod "bbcab12c-7b3a-4d8d-a51f-f386764d5cac" (UID: "bbcab12c-7b3a-4d8d-a51f-f386764d5cac"). InnerVolumeSpecName "kube-api-access-6vslm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:34:56 crc kubenswrapper[4846]: I1201 00:34:56.884122 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vslm\" (UniqueName: \"kubernetes.io/projected/bbcab12c-7b3a-4d8d-a51f-f386764d5cac-kube-api-access-6vslm\") on node \"crc\" DevicePath \"\"" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.426614 4846 generic.go:334] "Generic (PLEG): container finished" podID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" containerID="01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e" exitCode=0 Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.426672 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jj4xd" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.426722 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jj4xd" event={"ID":"bbcab12c-7b3a-4d8d-a51f-f386764d5cac","Type":"ContainerDied","Data":"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e"} Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.426774 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jj4xd" event={"ID":"bbcab12c-7b3a-4d8d-a51f-f386764d5cac","Type":"ContainerDied","Data":"a187749a3fe229d8fe00a18bc62e02a5378e0d476cd389c6a193542eabc7a28f"} Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.426823 4846 scope.go:117] "RemoveContainer" containerID="01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.442635 4846 scope.go:117] "RemoveContainer" containerID="01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e" Dec 01 00:34:57 crc kubenswrapper[4846]: E1201 00:34:57.443142 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e\": container with ID starting with 01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e not found: ID does not exist" containerID="01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.443197 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e"} err="failed to get container status \"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e\": rpc error: code = NotFound desc = could not find container \"01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e\": container with ID starting with 01b11fb815c8b6284fd1e4e468ee355437fa08c4a13280729bc6d928b82b664e not found: ID does not exist" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.454028 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.460261 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jj4xd"] Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.592537 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" path="/var/lib/kubelet/pods/bbcab12c-7b3a-4d8d-a51f-f386764d5cac/volumes" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.593277 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.593316 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:34:57 crc kubenswrapper[4846]: I1201 00:34:57.614975 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:35:07 crc kubenswrapper[4846]: I1201 00:35:07.613594 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-nzhxh" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.877811 4846 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr"] Dec 01 00:35:08 crc kubenswrapper[4846]: E1201 00:35:08.878674 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" containerName="registry-server" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.878756 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" containerName="registry-server" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.878967 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcab12c-7b3a-4d8d-a51f-f386764d5cac" containerName="registry-server" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.880522 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.885950 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr"] Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.938630 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.938733 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:08 crc kubenswrapper[4846]: I1201 00:35:08.938791 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d79h\" (UniqueName: \"kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.039479 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.039538 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d79h\" (UniqueName: \"kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc 
kubenswrapper[4846]: I1201 00:35:09.039571 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.040077 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.040278 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.067922 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d79h\" (UniqueName: \"kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.201284 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.415970 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr"] Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.515086 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" event={"ID":"b4649458-a012-402c-ba57-9dc9f05aca28","Type":"ContainerStarted","Data":"af4245cdcfaace044a48c17681a71f66aee64eb7482d55d0e7ade08a4f887a12"} Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.587209 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:35:09 crc kubenswrapper[4846]: E1201 00:35:09.587408 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.667525 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd"] Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.669052 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.678702 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd"] Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.749000 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.749054 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpkt\" (UniqueName: \"kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.749077 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.850337 4846 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.850592 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jpkt\" (UniqueName: \"kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.850612 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.850924 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.851044 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:09 crc kubenswrapper[4846]: I1201 00:35:09.872634 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpkt\" (UniqueName: \"kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.023396 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.216389 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd"] Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.525387 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4649458-a012-402c-ba57-9dc9f05aca28" containerID="38fbd9c21db444ee36f041a88e4ae2ae8e69b9ade864c8b41e2ade21710c01ac" exitCode=0 Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.525471 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" event={"ID":"b4649458-a012-402c-ba57-9dc9f05aca28","Type":"ContainerDied","Data":"38fbd9c21db444ee36f041a88e4ae2ae8e69b9ade864c8b41e2ade21710c01ac"} Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.529645 4846 generic.go:334] "Generic (PLEG): container finished" podID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerID="9cccbd380a92d3f950f978f6bfaf0a0d91a1f07bd837e807f4636f05d43c4395" exitCode=0 Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.529830 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" event={"ID":"339ff43e-b6aa-4a28-aaa6-447a330d6b75","Type":"ContainerDied","Data":"9cccbd380a92d3f950f978f6bfaf0a0d91a1f07bd837e807f4636f05d43c4395"} Dec 01 00:35:10 crc kubenswrapper[4846]: I1201 00:35:10.529866 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" event={"ID":"339ff43e-b6aa-4a28-aaa6-447a330d6b75","Type":"ContainerStarted","Data":"7800a670af91b47033711face421ddc4245634bff4c07553b1b548de68d6bbc6"} Dec 01 00:35:11 crc kubenswrapper[4846]: I1201 00:35:11.541471 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4649458-a012-402c-ba57-9dc9f05aca28" containerID="c6c1ff62f14e093f99910e3c3173091bf980691ce8fb5edc578648a6b231db2e" exitCode=0 Dec 01 00:35:11 crc kubenswrapper[4846]: I1201 00:35:11.541519 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" event={"ID":"b4649458-a012-402c-ba57-9dc9f05aca28","Type":"ContainerDied","Data":"c6c1ff62f14e093f99910e3c3173091bf980691ce8fb5edc578648a6b231db2e"} Dec 01 00:35:11 crc kubenswrapper[4846]: I1201 00:35:11.545255 4846 generic.go:334] "Generic (PLEG): container finished" podID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerID="3fd572f15fc8292babb5551a03cc0201558ebfb6054eef862de91683e97cb6a3" exitCode=0 Dec 01 00:35:11 crc kubenswrapper[4846]: I1201 00:35:11.545290 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" event={"ID":"339ff43e-b6aa-4a28-aaa6-447a330d6b75","Type":"ContainerDied","Data":"3fd572f15fc8292babb5551a03cc0201558ebfb6054eef862de91683e97cb6a3"} Dec 01 00:35:11 crc kubenswrapper[4846]: E1201 00:35:11.912508 4846 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4649458_a012_402c_ba57_9dc9f05aca28.slice/crio-conmon-de600c63d29c940d7e384afc07a4c11c5dc6f2e309d2099c3df382068248763b.scope\": RecentStats: unable to find data in memory cache]" Dec 01 
00:35:12 crc kubenswrapper[4846]: I1201 00:35:12.555259 4846 generic.go:334] "Generic (PLEG): container finished" podID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerID="8ba2b7f93f8909c50aa9579f3b9d71f5f9b4e4b8348b27ac49d70276eae37114" exitCode=0 Dec 01 00:35:12 crc kubenswrapper[4846]: I1201 00:35:12.555342 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" event={"ID":"339ff43e-b6aa-4a28-aaa6-447a330d6b75","Type":"ContainerDied","Data":"8ba2b7f93f8909c50aa9579f3b9d71f5f9b4e4b8348b27ac49d70276eae37114"} Dec 01 00:35:12 crc kubenswrapper[4846]: I1201 00:35:12.557662 4846 generic.go:334] "Generic (PLEG): container finished" podID="b4649458-a012-402c-ba57-9dc9f05aca28" containerID="de600c63d29c940d7e384afc07a4c11c5dc6f2e309d2099c3df382068248763b" exitCode=0 Dec 01 00:35:12 crc kubenswrapper[4846]: I1201 00:35:12.557749 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" event={"ID":"b4649458-a012-402c-ba57-9dc9f05aca28","Type":"ContainerDied","Data":"de600c63d29c940d7e384afc07a4c11c5dc6f2e309d2099c3df382068248763b"} Dec 01 00:35:13 crc kubenswrapper[4846]: I1201 00:35:13.847029 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:13 crc kubenswrapper[4846]: I1201 00:35:13.852788 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018488 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util\") pod \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018586 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d79h\" (UniqueName: \"kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h\") pod \"b4649458-a012-402c-ba57-9dc9f05aca28\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018748 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle\") pod \"b4649458-a012-402c-ba57-9dc9f05aca28\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018819 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle\") pod \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018881 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jpkt\" (UniqueName: \"kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt\") pod \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\" (UID: \"339ff43e-b6aa-4a28-aaa6-447a330d6b75\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.018914 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util\") pod \"b4649458-a012-402c-ba57-9dc9f05aca28\" (UID: \"b4649458-a012-402c-ba57-9dc9f05aca28\") " Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.019284 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle" (OuterVolumeSpecName: "bundle") pod "b4649458-a012-402c-ba57-9dc9f05aca28" (UID: "b4649458-a012-402c-ba57-9dc9f05aca28"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.019368 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle" (OuterVolumeSpecName: "bundle") pod "339ff43e-b6aa-4a28-aaa6-447a330d6b75" (UID: "339ff43e-b6aa-4a28-aaa6-447a330d6b75"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.024726 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt" (OuterVolumeSpecName: "kube-api-access-6jpkt") pod "339ff43e-b6aa-4a28-aaa6-447a330d6b75" (UID: "339ff43e-b6aa-4a28-aaa6-447a330d6b75"). InnerVolumeSpecName "kube-api-access-6jpkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.024737 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h" (OuterVolumeSpecName: "kube-api-access-9d79h") pod "b4649458-a012-402c-ba57-9dc9f05aca28" (UID: "b4649458-a012-402c-ba57-9dc9f05aca28"). InnerVolumeSpecName "kube-api-access-9d79h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.036241 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util" (OuterVolumeSpecName: "util") pod "339ff43e-b6aa-4a28-aaa6-447a330d6b75" (UID: "339ff43e-b6aa-4a28-aaa6-447a330d6b75"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.037827 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util" (OuterVolumeSpecName: "util") pod "b4649458-a012-402c-ba57-9dc9f05aca28" (UID: "b4649458-a012-402c-ba57-9dc9f05aca28"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120537 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120596 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d79h\" (UniqueName: \"kubernetes.io/projected/b4649458-a012-402c-ba57-9dc9f05aca28-kube-api-access-9d79h\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120614 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120627 4846 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/339ff43e-b6aa-4a28-aaa6-447a330d6b75-bundle\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120639 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jpkt\" (UniqueName: \"kubernetes.io/projected/339ff43e-b6aa-4a28-aaa6-447a330d6b75-kube-api-access-6jpkt\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.120655 4846 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4649458-a012-402c-ba57-9dc9f05aca28-util\") on node \"crc\" DevicePath \"\"" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.576223 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" event={"ID":"339ff43e-b6aa-4a28-aaa6-447a330d6b75","Type":"ContainerDied","Data":"7800a670af91b47033711face421ddc4245634bff4c07553b1b548de68d6bbc6"} Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.576543 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7800a670af91b47033711face421ddc4245634bff4c07553b1b548de68d6bbc6" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.576262 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65aklmrd" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.577984 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" event={"ID":"b4649458-a012-402c-ba57-9dc9f05aca28","Type":"ContainerDied","Data":"af4245cdcfaace044a48c17681a71f66aee64eb7482d55d0e7ade08a4f887a12"} Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.578009 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4245cdcfaace044a48c17681a71f66aee64eb7482d55d0e7ade08a4f887a12" Dec 01 00:35:14 crc kubenswrapper[4846]: I1201 00:35:14.578133 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c097nrtr" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897191 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7998f748fc-66l4k"] Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897497 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="pull" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897515 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="pull" Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897529 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897537 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897550 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897558 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897574 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="util" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897582 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="util" Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897595 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="util" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897602 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="util" Dec 01 00:35:17 crc kubenswrapper[4846]: E1201 00:35:17.897613 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="pull" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897622 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="pull" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897786 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4649458-a012-402c-ba57-9dc9f05aca28" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.897798 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="339ff43e-b6aa-4a28-aaa6-447a330d6b75" containerName="extract" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.898342 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.901137 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-pmp5s" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.909353 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7998f748fc-66l4k"] Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.972638 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5dcb0316-9aa0-4006-8850-3d4f820f988d-runner\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:17 crc kubenswrapper[4846]: I1201 00:35:17.973067 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dns2q\" (UniqueName: \"kubernetes.io/projected/5dcb0316-9aa0-4006-8850-3d4f820f988d-kube-api-access-dns2q\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:18 crc kubenswrapper[4846]: I1201 00:35:18.074404 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5dcb0316-9aa0-4006-8850-3d4f820f988d-runner\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:18 crc kubenswrapper[4846]: I1201 00:35:18.074472 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dns2q\" (UniqueName: \"kubernetes.io/projected/5dcb0316-9aa0-4006-8850-3d4f820f988d-kube-api-access-dns2q\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:18 crc kubenswrapper[4846]: I1201 00:35:18.075206 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5dcb0316-9aa0-4006-8850-3d4f820f988d-runner\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:18 crc kubenswrapper[4846]: I1201 00:35:18.096763 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dns2q\" (UniqueName: \"kubernetes.io/projected/5dcb0316-9aa0-4006-8850-3d4f820f988d-kube-api-access-dns2q\") pod \"service-telemetry-operator-7998f748fc-66l4k\" (UID: \"5dcb0316-9aa0-4006-8850-3d4f820f988d\") " pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:18 crc kubenswrapper[4846]: I1201 00:35:18.217631 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" Dec 01 00:35:19 crc kubenswrapper[4846]: I1201 00:35:19.338728 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7998f748fc-66l4k"] Dec 01 00:35:19 crc kubenswrapper[4846]: I1201 00:35:19.612640 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" event={"ID":"5dcb0316-9aa0-4006-8850-3d4f820f988d","Type":"ContainerStarted","Data":"478b5bde84e1d1020e31c0f6a6b741e8cc88833a8c618d4e1835067d93fc0450"} Dec 01 00:35:21 crc kubenswrapper[4846]: I1201 00:35:21.580657 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:35:21 crc kubenswrapper[4846]: E1201 00:35:21.581154 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.163136 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-95bf5898f-5jc24"] Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.164420 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.169790 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-8jg4q" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.182594 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-95bf5898f-5jc24"] Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.303500 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-runner\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.303625 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfwbd\" (UniqueName: \"kubernetes.io/projected/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-kube-api-access-cfwbd\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.404954 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfwbd\" (UniqueName: \"kubernetes.io/projected/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-kube-api-access-cfwbd\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.405083 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-runner\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.405787 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-runner\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.446124 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfwbd\" (UniqueName: \"kubernetes.io/projected/df86ead6-ba68-4d18-8d2f-3e74d8b0328e-kube-api-access-cfwbd\") pod \"smart-gateway-operator-95bf5898f-5jc24\" (UID: \"df86ead6-ba68-4d18-8d2f-3e74d8b0328e\") " pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:26 crc kubenswrapper[4846]: I1201 00:35:26.492216 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" Dec 01 00:35:35 crc kubenswrapper[4846]: I1201 00:35:35.485106 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-95bf5898f-5jc24"] Dec 01 00:35:36 crc kubenswrapper[4846]: I1201 00:35:36.580409 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:35:36 crc kubenswrapper[4846]: E1201 00:35:36.581139 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:35:38 crc kubenswrapper[4846]: E1201 00:35:38.448314 4846 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:latest" Dec 01 00:35:38 crc kubenswrapper[4846]: E1201 00:35:38.448851 4846 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1764549176,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dns2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-7998f748fc-66l4k_service-telemetry(5dcb0316-9aa0-4006-8850-3d4f820f988d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 01 00:35:38 crc kubenswrapper[4846]: E1201 00:35:38.450089 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" podUID="5dcb0316-9aa0-4006-8850-3d4f820f988d" Dec 01 00:35:38 crc kubenswrapper[4846]: I1201 00:35:38.750481 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" event={"ID":"df86ead6-ba68-4d18-8d2f-3e74d8b0328e","Type":"ContainerStarted","Data":"f5947b7083afabc7d6c1c21761530a232bd369c6a61de12244da4e06e81f4c40"} Dec 01 00:35:38 crc kubenswrapper[4846]: E1201 00:35:38.752709 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/infrawatch/service-telemetry-operator:latest\\\"\"" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" podUID="5dcb0316-9aa0-4006-8850-3d4f820f988d" Dec 01 00:35:42 crc kubenswrapper[4846]: I1201 00:35:42.794813 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" event={"ID":"df86ead6-ba68-4d18-8d2f-3e74d8b0328e","Type":"ContainerStarted","Data":"f522b8acb99a2414232425d31d846075ee6d4823dc0fb399bef184e917a1cda7"} Dec 01 00:35:42 crc kubenswrapper[4846]: I1201 00:35:42.817291 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-95bf5898f-5jc24" podStartSLOduration=12.247070036 podStartE2EDuration="16.817273521s" podCreationTimestamp="2025-12-01 00:35:26 +0000 UTC" firstStartedPulling="2025-12-01 00:35:37.992845377 +0000 UTC m=+1758.773614451" lastFinishedPulling="2025-12-01 00:35:42.563048862 +0000 UTC m=+1763.343817936" observedRunningTime="2025-12-01 00:35:42.81661037 +0000 UTC m=+1763.597379444" watchObservedRunningTime="2025-12-01 00:35:42.817273521 +0000 UTC m=+1763.598042585" Dec 01 00:35:49 crc kubenswrapper[4846]: I1201 00:35:49.586919 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:35:49 crc kubenswrapper[4846]: E1201 00:35:49.587801 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:35:52 crc kubenswrapper[4846]: I1201 00:35:52.863054 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" event={"ID":"5dcb0316-9aa0-4006-8850-3d4f820f988d","Type":"ContainerStarted","Data":"2cad419ba2e302407fe90651c504d8c19f1ed57503a8e4dd22f2111e93e4102e"} Dec 01 00:35:52 crc kubenswrapper[4846]: I1201 00:35:52.897417 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7998f748fc-66l4k" podStartSLOduration=2.92583913 podStartE2EDuration="35.897390283s" podCreationTimestamp="2025-12-01 00:35:17 +0000 UTC" firstStartedPulling="2025-12-01 00:35:19.356989503 +0000 UTC m=+1740.137758577" lastFinishedPulling="2025-12-01 00:35:52.328540656 +0000 UTC m=+1773.109309730" observedRunningTime="2025-12-01 00:35:52.883278414 +0000 UTC m=+1773.664047488" watchObservedRunningTime="2025-12-01 00:35:52.897390283 +0000 UTC m=+1773.678159357" Dec 01 00:36:02 crc kubenswrapper[4846]: I1201 00:36:02.580171 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:36:02 crc kubenswrapper[4846]: E1201 00:36:02.582432 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:36:14 crc kubenswrapper[4846]: I1201 00:36:14.580329 4846 
scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:36:14 crc kubenswrapper[4846]: E1201 00:36:14.581026 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.148288 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.149253 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.151404 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.151931 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.152163 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-w77zl" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.152179 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.152479 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.152636 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.152944 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.155462 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286098 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286156 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286211 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-67pz9\" (UniqueName: \"kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286274 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286335 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286377 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.286418 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387265 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67pz9\" (UniqueName: \"kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387335 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387395 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387434 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387491 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387548 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.387571 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.388716 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.394337 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.394602 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.394987 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.396462 4846 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.408811 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.433896 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67pz9\" (UniqueName: \"kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9\") pod \"default-interconnect-68864d46cb-p5vl8\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.470118 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.903136 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:36:17 crc kubenswrapper[4846]: I1201 00:36:17.911960 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:36:18 crc kubenswrapper[4846]: I1201 00:36:18.071033 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" event={"ID":"d78d3304-bbdb-4c59-a156-dfba027c6305","Type":"ContainerStarted","Data":"9e9cc0792150f87de9a13a9a3a074063c154c240113e13d0900949347fd052ed"} Dec 01 00:36:23 crc kubenswrapper[4846]: I1201 00:36:23.111722 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" event={"ID":"d78d3304-bbdb-4c59-a156-dfba027c6305","Type":"ContainerStarted","Data":"fa3690359c0722a92dbef0200c78fd140578133bc3dc1579ca080d79ab90c9f7"} Dec 01 00:36:23 crc kubenswrapper[4846]: I1201 00:36:23.143089 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" podStartSLOduration=1.409395118 podStartE2EDuration="6.143068309s" podCreationTimestamp="2025-12-01 00:36:17 +0000 UTC" firstStartedPulling="2025-12-01 00:36:17.911769507 +0000 UTC m=+1798.692538581" lastFinishedPulling="2025-12-01 00:36:22.645442698 +0000 UTC m=+1803.426211772" observedRunningTime="2025-12-01 00:36:23.138861748 +0000 UTC m=+1803.919630862" watchObservedRunningTime="2025-12-01 00:36:23.143068309 +0000 UTC m=+1803.923837373" Dec 01 00:36:25 crc kubenswrapper[4846]: I1201 00:36:25.580591 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:36:25 crc kubenswrapper[4846]: E1201 00:36:25.581924 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.636978 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.638359 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.640469 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-qtn4j" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.641515 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.641530 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.641543 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.641543 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.641880 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.642566 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.642873 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.683175 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740746 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwh7\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-kube-api-access-mcwh7\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740801 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-tls-assets\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740851 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-web-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740886 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740912 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740935 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740959 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.740975 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config-out\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.741379 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.741488 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-12775335-47b3-4737-bbe3-b6d268af4e22\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12775335-47b3-4737-bbe3-b6d268af4e22\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.843395 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-web-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.844177 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " 
pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.844479 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.845511 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846259 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: E1201 00:36:27.844417 4846 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 01 00:36:27 crc kubenswrapper[4846]: E1201 00:36:27.846545 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls podName:d2e6f159-62bc-42e9-94b7-5d33a34a2bbe nodeName:}" failed. No retries permitted until 2025-12-01 00:36:28.346513719 +0000 UTC m=+1809.127282863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d2e6f159-62bc-42e9-94b7-5d33a34a2bbe") : secret "default-prometheus-proxy-tls" not found Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846440 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config-out\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.845350 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846628 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846819 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-12775335-47b3-4737-bbe3-b6d268af4e22\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12775335-47b3-4737-bbe3-b6d268af4e22\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846964 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-tls-assets\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.846995 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwh7\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-kube-api-access-mcwh7\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.848099 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.850649 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.851203 4846 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.852311 4846 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.852383 4846 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-12775335-47b3-4737-bbe3-b6d268af4e22\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12775335-47b3-4737-bbe3-b6d268af4e22\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/640a97e071c1b9b68b42a25f8bd1309cf42fe7841a1f2fe0d0a47e24226f3614/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.853204 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-web-config\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.853978 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-config-out\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.861424 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-tls-assets\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.871144 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwh7\" (UniqueName: \"kubernetes.io/projected/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-kube-api-access-mcwh7\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:27 crc kubenswrapper[4846]: I1201 00:36:27.881037 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-12775335-47b3-4737-bbe3-b6d268af4e22\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-12775335-47b3-4737-bbe3-b6d268af4e22\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:28 crc kubenswrapper[4846]: I1201 00:36:28.354815 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:28 crc kubenswrapper[4846]: E1201 00:36:28.355056 4846 secret.go:188] Couldn't get secret 
service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 01 00:36:28 crc kubenswrapper[4846]: E1201 00:36:28.355202 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls podName:d2e6f159-62bc-42e9-94b7-5d33a34a2bbe nodeName:}" failed. No retries permitted until 2025-12-01 00:36:29.355169202 +0000 UTC m=+1810.135938316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d2e6f159-62bc-42e9-94b7-5d33a34a2bbe") : secret "default-prometheus-proxy-tls" not found Dec 01 00:36:29 crc kubenswrapper[4846]: I1201 00:36:29.370074 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:29 crc kubenswrapper[4846]: I1201 00:36:29.378112 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d2e6f159-62bc-42e9-94b7-5d33a34a2bbe-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe\") " pod="service-telemetry/prometheus-default-0" Dec 01 00:36:29 crc kubenswrapper[4846]: I1201 00:36:29.454329 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 01 00:36:29 crc kubenswrapper[4846]: I1201 00:36:29.730908 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 01 00:36:30 crc kubenswrapper[4846]: I1201 00:36:30.161750 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerStarted","Data":"753109af94d8be1abe9ac821491b4d9f8062212527a4a03c5a2c0cc4936f36a1"} Dec 01 00:36:34 crc kubenswrapper[4846]: I1201 00:36:34.205753 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerStarted","Data":"16bcdbbb1f8ce6dd842e9b19bb0700d9f29ceee6a617598efce69b1b1cbe622c"} Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.061026 4846 scope.go:117] "RemoveContainer" containerID="82531ccc593b911f729b728597612146674f2ac124c8f2152305c7f391a49505" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.106672 4846 scope.go:117] "RemoveContainer" containerID="1f894d8e499fc9d110382ee816954895420a02c97facbd24bfa3cd35b26fda14" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.192560 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-k55wh"] Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.193553 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.205804 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-k55wh"] Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.294191 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckjp9\" (UniqueName: \"kubernetes.io/projected/fd576764-efbc-4385-ab49-e960a95d095d-kube-api-access-ckjp9\") pod \"default-snmp-webhook-6856cfb745-k55wh\" (UID: \"fd576764-efbc-4385-ab49-e960a95d095d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.395382 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckjp9\" (UniqueName: \"kubernetes.io/projected/fd576764-efbc-4385-ab49-e960a95d095d-kube-api-access-ckjp9\") pod \"default-snmp-webhook-6856cfb745-k55wh\" (UID: \"fd576764-efbc-4385-ab49-e960a95d095d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.423755 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckjp9\" (UniqueName: \"kubernetes.io/projected/fd576764-efbc-4385-ab49-e960a95d095d-kube-api-access-ckjp9\") pod \"default-snmp-webhook-6856cfb745-k55wh\" (UID: \"fd576764-efbc-4385-ab49-e960a95d095d\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.517247 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" Dec 01 00:36:38 crc kubenswrapper[4846]: I1201 00:36:38.770624 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-k55wh"] Dec 01 00:36:39 crc kubenswrapper[4846]: I1201 00:36:39.239725 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" event={"ID":"fd576764-efbc-4385-ab49-e960a95d095d","Type":"ContainerStarted","Data":"c682f1981edd916b90693db7d8fc5e0f8fd637367c349517f499355883d07597"} Dec 01 00:36:39 crc kubenswrapper[4846]: I1201 00:36:39.586746 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:36:39 crc kubenswrapper[4846]: E1201 00:36:39.587025 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.578408 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.581019 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584071 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-4f42v" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584248 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584390 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584504 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584597 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.584723 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.598723 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.753832 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-out\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.753914 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.753956 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.753985 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbn8\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-kube-api-access-kgbn8\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.754008 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.754032 4846 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.754609 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-volume\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.754785 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.754822 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-web-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855519 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855577 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855598 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbn8\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-kube-api-access-kgbn8\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855615 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855637 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " 
pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855676 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-volume\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855732 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855755 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-web-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.855793 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-out\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: E1201 00:36:41.855887 4846 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:41 crc kubenswrapper[4846]: E1201 00:36:41.856003 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls podName:9d27e84f-ca2b-42fc-bbbd-2132be113b65 nodeName:}" failed. No retries permitted until 2025-12-01 00:36:42.355970119 +0000 UTC m=+1823.136739243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9d27e84f-ca2b-42fc-bbbd-2132be113b65") : secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.860892 4846 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.861231 4846 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d5fdc1ff3335999ecf9725a81d4c860f72ded74c65086afe3e4c36adb04e012a/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.862009 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-web-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.862236 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-tls-assets\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.862330 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-out\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.863009 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.865205 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.866600 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-config-volume\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.891635 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbn8\" (UniqueName: \"kubernetes.io/projected/9d27e84f-ca2b-42fc-bbbd-2132be113b65-kube-api-access-kgbn8\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:41 crc kubenswrapper[4846]: I1201 00:36:41.893282 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-903ac425-e7aa-468e-b090-3e10b6b56a94\") pod 
\"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:42 crc kubenswrapper[4846]: I1201 00:36:42.370629 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:42 crc kubenswrapper[4846]: E1201 00:36:42.370810 4846 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:42 crc kubenswrapper[4846]: E1201 00:36:42.370878 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls podName:9d27e84f-ca2b-42fc-bbbd-2132be113b65 nodeName:}" failed. No retries permitted until 2025-12-01 00:36:43.370859017 +0000 UTC m=+1824.151628091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9d27e84f-ca2b-42fc-bbbd-2132be113b65") : secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:43 crc kubenswrapper[4846]: I1201 00:36:43.280220 4846 generic.go:334] "Generic (PLEG): container finished" podID="d2e6f159-62bc-42e9-94b7-5d33a34a2bbe" containerID="16bcdbbb1f8ce6dd842e9b19bb0700d9f29ceee6a617598efce69b1b1cbe622c" exitCode=0 Dec 01 00:36:43 crc kubenswrapper[4846]: I1201 00:36:43.280326 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerDied","Data":"16bcdbbb1f8ce6dd842e9b19bb0700d9f29ceee6a617598efce69b1b1cbe622c"} Dec 01 00:36:43 crc kubenswrapper[4846]: I1201 00:36:43.383979 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:43 crc kubenswrapper[4846]: E1201 00:36:43.384613 4846 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:43 crc kubenswrapper[4846]: E1201 00:36:43.384732 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls podName:9d27e84f-ca2b-42fc-bbbd-2132be113b65 nodeName:}" failed. No retries permitted until 2025-12-01 00:36:45.384709677 +0000 UTC m=+1826.165478831 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "9d27e84f-ca2b-42fc-bbbd-2132be113b65") : secret "default-alertmanager-proxy-tls" not found Dec 01 00:36:45 crc kubenswrapper[4846]: I1201 00:36:45.458027 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:45 crc kubenswrapper[4846]: I1201 00:36:45.473377 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d27e84f-ca2b-42fc-bbbd-2132be113b65-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"9d27e84f-ca2b-42fc-bbbd-2132be113b65\") " pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:45 crc kubenswrapper[4846]: I1201 00:36:45.510895 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 01 00:36:46 crc kubenswrapper[4846]: I1201 00:36:46.001006 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 01 00:36:46 crc kubenswrapper[4846]: W1201 00:36:46.232378 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d27e84f_ca2b_42fc_bbbd_2132be113b65.slice/crio-06cb77d2617535bb81271b034d87387e99f9f92bf01fd176fa0416f3d5b25ddd WatchSource:0}: Error finding container 06cb77d2617535bb81271b034d87387e99f9f92bf01fd176fa0416f3d5b25ddd: Status 404 returned error can't find the container with id 06cb77d2617535bb81271b034d87387e99f9f92bf01fd176fa0416f3d5b25ddd Dec 01 00:36:46 crc kubenswrapper[4846]: I1201 00:36:46.302974 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerStarted","Data":"06cb77d2617535bb81271b034d87387e99f9f92bf01fd176fa0416f3d5b25ddd"} Dec 01 00:36:46 crc kubenswrapper[4846]: I1201 00:36:46.304065 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" event={"ID":"fd576764-efbc-4385-ab49-e960a95d095d","Type":"ContainerStarted","Data":"de0b91efb092898640294701080c69969e027e45727f20f30eb2744d3b99e01a"} Dec 01 00:36:46 crc kubenswrapper[4846]: I1201 00:36:46.325773 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-k55wh" podStartSLOduration=1.511527188 podStartE2EDuration="8.325750271s" podCreationTimestamp="2025-12-01 00:36:38 +0000 UTC" firstStartedPulling="2025-12-01 00:36:38.781742194 +0000 UTC m=+1819.562511308" lastFinishedPulling="2025-12-01 00:36:45.595965317 +0000 UTC m=+1826.376734391" observedRunningTime="2025-12-01 00:36:46.324555423 +0000 UTC m=+1827.105324497" watchObservedRunningTime="2025-12-01 00:36:46.325750271 +0000 UTC m=+1827.106519355" Dec 01 00:36:49 crc kubenswrapper[4846]: I1201 00:36:49.327410 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerStarted","Data":"bb01dbdedf0e67fc14de8ff889a72e8fc700a73d06ffb1c1d0fc484adea861a1"} Dec 01 00:36:52 crc kubenswrapper[4846]: I1201 00:36:52.350581 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerStarted","Data":"027c1e651111e74d9e995c42e6ac4e68b9ccec69c57ade8995cecc4e3c8c34c0"} Dec 01 00:36:53 crc kubenswrapper[4846]: I1201 00:36:53.580430 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:36:53 crc kubenswrapper[4846]: E1201 00:36:53.580925 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:36:55 crc kubenswrapper[4846]: I1201 00:36:55.373262 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerStarted","Data":"29bf56e0818e8cc1f7a032ecfd9b4e3cf24ba55d46f10b816c38a63221a997f4"} Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.937212 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x"] Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.938576 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.941044 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.941197 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-m7fmr" Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.941389 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.941535 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Dec 01 00:36:56 crc kubenswrapper[4846]: I1201 00:36:56.976749 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x"] Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.046236 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.046299 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwpsn\" (UniqueName: 
\"kubernetes.io/projected/e6efa439-ca46-4c99-be4d-60f9b510683b-kube-api-access-bwpsn\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.046361 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6efa439-ca46-4c99-be4d-60f9b510683b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.046401 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.046439 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e6efa439-ca46-4c99-be4d-60f9b510683b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.147942 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwpsn\" (UniqueName: \"kubernetes.io/projected/e6efa439-ca46-4c99-be4d-60f9b510683b-kube-api-access-bwpsn\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.148025 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6efa439-ca46-4c99-be4d-60f9b510683b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.148072 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.148139 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e6efa439-ca46-4c99-be4d-60f9b510683b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.148185 
4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: E1201 00:36:57.148346 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 01 00:36:57 crc kubenswrapper[4846]: E1201 00:36:57.148408 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls podName:e6efa439-ca46-4c99-be4d-60f9b510683b nodeName:}" failed. No retries permitted until 2025-12-01 00:36:57.648387623 +0000 UTC m=+1838.429156697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" (UID: "e6efa439-ca46-4c99-be4d-60f9b510683b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.149259 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e6efa439-ca46-4c99-be4d-60f9b510683b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.150272 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e6efa439-ca46-4c99-be4d-60f9b510683b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.154466 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.171216 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwpsn\" (UniqueName: \"kubernetes.io/projected/e6efa439-ca46-4c99-be4d-60f9b510683b-kube-api-access-bwpsn\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.389541 4846 generic.go:334] "Generic (PLEG): container finished" podID="9d27e84f-ca2b-42fc-bbbd-2132be113b65" containerID="bb01dbdedf0e67fc14de8ff889a72e8fc700a73d06ffb1c1d0fc484adea861a1" exitCode=0 Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.389613 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerDied","Data":"bb01dbdedf0e67fc14de8ff889a72e8fc700a73d06ffb1c1d0fc484adea861a1"} Dec 01 00:36:57 crc kubenswrapper[4846]: I1201 00:36:57.656904 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:57 crc kubenswrapper[4846]: E1201 00:36:57.659344 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 01 00:36:57 crc kubenswrapper[4846]: E1201 00:36:57.659404 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls podName:e6efa439-ca46-4c99-be4d-60f9b510683b nodeName:}" failed. No retries permitted until 2025-12-01 00:36:58.659387219 +0000 UTC m=+1839.440156303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" (UID: "e6efa439-ca46-4c99-be4d-60f9b510683b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 01 00:36:58 crc kubenswrapper[4846]: I1201 00:36:58.676710 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:58 crc kubenswrapper[4846]: I1201 00:36:58.683225 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6efa439-ca46-4c99-be4d-60f9b510683b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x\" (UID: \"e6efa439-ca46-4c99-be4d-60f9b510683b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:58 crc kubenswrapper[4846]: I1201 00:36:58.759144 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.657632 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2"] Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.659895 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.662342 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.662565 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.675957 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2"] Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.793255 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmlx\" (UniqueName: \"kubernetes.io/projected/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-kube-api-access-ngmlx\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.793301 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.793534 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.793640 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.793737 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.895302 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc 
kubenswrapper[4846]: I1201 00:36:59.895372 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.895410 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.895452 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngmlx\" (UniqueName: \"kubernetes.io/projected/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-kube-api-access-ngmlx\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.895481 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: E1201 00:36:59.895635 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 01 00:36:59 crc kubenswrapper[4846]: E1201 00:36:59.895781 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls podName:1fce5bc7-b9ff-465c-88ee-ccb105c9f364 nodeName:}" failed. No retries permitted until 2025-12-01 00:37:00.395756921 +0000 UTC m=+1841.176525995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" (UID: "1fce5bc7-b9ff-465c-88ee-ccb105c9f364") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.896001 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.896827 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.901350 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:36:59 crc kubenswrapper[4846]: I1201 00:36:59.912908 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngmlx\" (UniqueName: \"kubernetes.io/projected/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-kube-api-access-ngmlx\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:37:00 crc kubenswrapper[4846]: I1201 00:37:00.402732 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:37:00 crc kubenswrapper[4846]: E1201 00:37:00.402870 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 01 00:37:00 crc kubenswrapper[4846]: E1201 00:37:00.402931 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls podName:1fce5bc7-b9ff-465c-88ee-ccb105c9f364 nodeName:}" failed. No retries permitted until 2025-12-01 00:37:01.402913848 +0000 UTC m=+1842.183682922 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" (UID: "1fce5bc7-b9ff-465c-88ee-ccb105c9f364") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.416102 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.425710 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fce5bc7-b9ff-465c-88ee-ccb105c9f364-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2\" (UID: \"1fce5bc7-b9ff-465c-88ee-ccb105c9f364\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.455653 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d2e6f159-62bc-42e9-94b7-5d33a34a2bbe","Type":"ContainerStarted","Data":"5938606fb91035a267c6d1fdb60760fc08f18aca8986e93440f81003710b9ffb"} Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.480883 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.484407 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.04302943 podStartE2EDuration="35.484397092s" podCreationTimestamp="2025-12-01 00:36:26 +0000 UTC" firstStartedPulling="2025-12-01 00:36:29.734800471 +0000 UTC m=+1810.515569545" lastFinishedPulling="2025-12-01 00:37:01.176168133 +0000 UTC m=+1841.956937207" observedRunningTime="2025-12-01 00:37:01.482376109 +0000 UTC m=+1842.263145183" watchObservedRunningTime="2025-12-01 00:37:01.484397092 +0000 UTC m=+1842.265166166" Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.510769 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x"] Dec 01 00:37:01 crc kubenswrapper[4846]: I1201 00:37:01.927308 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2"] Dec 01 00:37:02 crc kubenswrapper[4846]: W1201 00:37:02.397370 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fce5bc7_b9ff_465c_88ee_ccb105c9f364.slice/crio-559c11056ec7cfc5467963054e145f6f6064c9a3de6c8d805ec64c17e4c0e73b WatchSource:0}: Error finding container 559c11056ec7cfc5467963054e145f6f6064c9a3de6c8d805ec64c17e4c0e73b: Status 404 returned error can't find the container with id 559c11056ec7cfc5467963054e145f6f6064c9a3de6c8d805ec64c17e4c0e73b Dec 01 00:37:02 crc kubenswrapper[4846]: I1201 00:37:02.469491 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"de07dff0e908ba5424c563170ccf939e9b9d534ba2beb9c9354e3ca984d2fe12"} Dec 01 00:37:02 crc kubenswrapper[4846]: I1201 00:37:02.470273 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"559c11056ec7cfc5467963054e145f6f6064c9a3de6c8d805ec64c17e4c0e73b"} Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.480505 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerStarted","Data":"81a8176e9325a9b3dab5c6d866dcc1656d07a9203d629fa3d1a4b07577deef49"} Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.482660 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"86a740601318e537720fa07f4f255cb9377c001465534aa21674c875e87c9e00"} Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.648641 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w"] Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.649970 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.653233 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.653577 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.660349 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w"] Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.748387 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.748461 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.748493 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxt8\" (UniqueName: \"kubernetes.io/projected/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-kube-api-access-9cxt8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: 
\"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.748525 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.748556 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.850770 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.850872 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.850905 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxt8\" (UniqueName: \"kubernetes.io/projected/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-kube-api-access-9cxt8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.850948 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.851016 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.851543 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: E1201 00:37:03.851551 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.852752 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: E1201 00:37:03.853648 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls podName:97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85 nodeName:}" failed. No retries permitted until 2025-12-01 00:37:04.353599326 +0000 UTC m=+1845.134368400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" (UID: "97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.861505 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:03 crc kubenswrapper[4846]: I1201 00:37:03.868662 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxt8\" (UniqueName: \"kubernetes.io/projected/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-kube-api-access-9cxt8\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:04 crc kubenswrapper[4846]: I1201 00:37:04.360134 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:04 crc kubenswrapper[4846]: E1201 00:37:04.360337 4846 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 01 00:37:04 crc kubenswrapper[4846]: E1201 00:37:04.360602 4846 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls podName:97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85 nodeName:}" 
failed. No retries permitted until 2025-12-01 00:37:05.360577438 +0000 UTC m=+1846.141346512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" (UID: "97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 01 00:37:04 crc kubenswrapper[4846]: I1201 00:37:04.454999 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 01 00:37:04 crc kubenswrapper[4846]: I1201 00:37:04.495110 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"d294521926d81546f29a7524d54ecf2b1cbb15c4718cea26a414ec86dd977666"} Dec 01 00:37:05 crc kubenswrapper[4846]: I1201 00:37:05.374820 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:05 crc kubenswrapper[4846]: I1201 00:37:05.394292 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w\" (UID: \"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:05 crc kubenswrapper[4846]: I1201 00:37:05.482567 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" Dec 01 00:37:05 crc kubenswrapper[4846]: I1201 00:37:05.506928 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerStarted","Data":"d33729e84fc9287099d1c805d42e18e94e45bd93007316faf1f4413960142591"} Dec 01 00:37:05 crc kubenswrapper[4846]: I1201 00:37:05.584319 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:37:05 crc kubenswrapper[4846]: E1201 00:37:05.584525 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.111779 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w"] Dec 01 00:37:09 crc kubenswrapper[4846]: W1201 00:37:09.115411 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97fd4e8d_e3bb_4a6c_b9c9_9448fab5dc85.slice/crio-f88226416292560cde2d5924545e886b26ba98c7a835db8f5b7e7babe70ed99e WatchSource:0}: Error finding container f88226416292560cde2d5924545e886b26ba98c7a835db8f5b7e7babe70ed99e: Status 404 returned error can't find the container with id f88226416292560cde2d5924545e886b26ba98c7a835db8f5b7e7babe70ed99e Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.549443 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"f88226416292560cde2d5924545e886b26ba98c7a835db8f5b7e7babe70ed99e"} Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.554614 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"9d27e84f-ca2b-42fc-bbbd-2132be113b65","Type":"ContainerStarted","Data":"cabd7202343f8844a80305a43c1f71905f9c9a310711bb4bcd1071652d0aa319"} Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.559392 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"06e40a73c4b76683bdb5e5fe53a21c65f11d50c45f0aa4302dd0fdfe21b86127"} Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.563137 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"7f661657af7027c2253ccd28b08b683b26b46c2d3888c50e34cba6aaf81ccf52"} Dec 01 00:37:09 crc kubenswrapper[4846]: I1201 00:37:09.600669 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=18.226991467 podStartE2EDuration="29.600571139s" podCreationTimestamp="2025-12-01 00:36:40 +0000 UTC" firstStartedPulling="2025-12-01 00:36:57.392317281 
+0000 UTC m=+1838.173086355" lastFinishedPulling="2025-12-01 00:37:08.765896953 +0000 UTC m=+1849.546666027" observedRunningTime="2025-12-01 00:37:09.58647334 +0000 UTC m=+1850.367242414" watchObservedRunningTime="2025-12-01 00:37:09.600571139 +0000 UTC m=+1850.381340233" Dec 01 00:37:10 crc kubenswrapper[4846]: I1201 00:37:10.576317 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"900628c2467a7e9058821bf373b699b773dcb4946a99694e303c557e09e83f21"} Dec 01 00:37:10 crc kubenswrapper[4846]: I1201 00:37:10.576581 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"860d14e9a1803a03b37075d318c05292f2ba589828fd2a681ecaa18dfc3c63e0"} Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.374232 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65"] Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.377589 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.380854 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.381093 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.389834 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65"] Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.478609 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a7aaf165-c4f0-4a91-9682-d92f647deec8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.478702 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7aaf165-c4f0-4a91-9682-d92f647deec8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.478765 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w47dq\" (UniqueName: \"kubernetes.io/projected/a7aaf165-c4f0-4a91-9682-d92f647deec8-kube-api-access-w47dq\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.478912 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/a7aaf165-c4f0-4a91-9682-d92f647deec8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.579802 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w47dq\" (UniqueName: \"kubernetes.io/projected/a7aaf165-c4f0-4a91-9682-d92f647deec8-kube-api-access-w47dq\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.579894 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a7aaf165-c4f0-4a91-9682-d92f647deec8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.579943 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a7aaf165-c4f0-4a91-9682-d92f647deec8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.579980 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7aaf165-c4f0-4a91-9682-d92f647deec8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.580530 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7aaf165-c4f0-4a91-9682-d92f647deec8-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.581183 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a7aaf165-c4f0-4a91-9682-d92f647deec8-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.598147 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w47dq\" (UniqueName: \"kubernetes.io/projected/a7aaf165-c4f0-4a91-9682-d92f647deec8-kube-api-access-w47dq\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.605324 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" 
(UniqueName: \"kubernetes.io/secret/a7aaf165-c4f0-4a91-9682-d92f647deec8-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-fb6795fb7-crs65\" (UID: \"a7aaf165-c4f0-4a91-9682-d92f647deec8\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:11 crc kubenswrapper[4846]: I1201 00:37:11.698761 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.108038 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd"] Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.109911 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.113905 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.127625 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd"] Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.196356 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96lf\" (UniqueName: \"kubernetes.io/projected/7a969c8d-e631-466c-b189-b798d0b03173-kube-api-access-b96lf\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.196672 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a969c8d-e631-466c-b189-b798d0b03173-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.196746 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a969c8d-e631-466c-b189-b798d0b03173-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.196813 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a969c8d-e631-466c-b189-b798d0b03173-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.298613 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a969c8d-e631-466c-b189-b798d0b03173-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: 
\"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.298727 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a969c8d-e631-466c-b189-b798d0b03173-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.298755 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96lf\" (UniqueName: \"kubernetes.io/projected/7a969c8d-e631-466c-b189-b798d0b03173-kube-api-access-b96lf\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.298770 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a969c8d-e631-466c-b189-b798d0b03173-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.299899 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a969c8d-e631-466c-b189-b798d0b03173-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.300037 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7a969c8d-e631-466c-b189-b798d0b03173-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.312558 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7a969c8d-e631-466c-b189-b798d0b03173-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.314962 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96lf\" (UniqueName: \"kubernetes.io/projected/7a969c8d-e631-466c-b189-b798d0b03173-kube-api-access-b96lf\") pod \"default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd\" (UID: \"7a969c8d-e631-466c-b189-b798d0b03173\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:12 crc kubenswrapper[4846]: I1201 00:37:12.428459 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.248288 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd"] Dec 01 00:37:14 crc kubenswrapper[4846]: W1201 00:37:14.251410 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a969c8d_e631_466c_b189_b798d0b03173.slice/crio-28fc72f2a46150158560ef86802cf8b1f045d8fdac6f37d9cec39d8d27b4da93 WatchSource:0}: Error finding container 28fc72f2a46150158560ef86802cf8b1f045d8fdac6f37d9cec39d8d27b4da93: Status 404 returned error can't find the container with id 28fc72f2a46150158560ef86802cf8b1f045d8fdac6f37d9cec39d8d27b4da93 Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.300562 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65"] Dec 01 00:37:14 crc kubenswrapper[4846]: W1201 00:37:14.303472 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aaf165_c4f0_4a91_9682_d92f647deec8.slice/crio-30ee3416c8222b5c41595b734a68de0207881556eea25c74c15af2488df619a3 WatchSource:0}: Error finding container 30ee3416c8222b5c41595b734a68de0207881556eea25c74c15af2488df619a3: Status 404 returned error can't find the container with id 30ee3416c8222b5c41595b734a68de0207881556eea25c74c15af2488df619a3 Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.455194 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.514058 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.624838 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerStarted","Data":"317dd8ac505bf09f11d22767dbf5daa976cc086572586fb49e4b423ed8b09968"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.624892 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerStarted","Data":"28fc72f2a46150158560ef86802cf8b1f045d8fdac6f37d9cec39d8d27b4da93"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.631849 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"52f12aa06d2d2da8a531c8173d9f03827717fb21baa14be72622cbe05d42fefb"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.636122 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerStarted","Data":"b0db48abe4da31b9120bf23e48d851b3a2d31792eca67474b6933fe99097d8ca"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.636152 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" 
event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerStarted","Data":"30ee3416c8222b5c41595b734a68de0207881556eea25c74c15af2488df619a3"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.638257 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"b6df1599afa7262f745c6aa03184ec9d075655657311226bbeed374722fad6af"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.640187 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"f3f147522ef5b0ed854888cdec88b42681ede665712d2aad78b4a223e7766a7f"} Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.659999 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" podStartSLOduration=6.922395671 podStartE2EDuration="11.659980863s" podCreationTimestamp="2025-12-01 00:37:03 +0000 UTC" firstStartedPulling="2025-12-01 00:37:09.118340477 +0000 UTC m=+1849.899109551" lastFinishedPulling="2025-12-01 00:37:13.855925669 +0000 UTC m=+1854.636694743" observedRunningTime="2025-12-01 00:37:14.658662081 +0000 UTC m=+1855.439431155" watchObservedRunningTime="2025-12-01 00:37:14.659980863 +0000 UTC m=+1855.440749937" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.697957 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.713191 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" podStartSLOduration=4.379459716 podStartE2EDuration="15.713170937s" podCreationTimestamp="2025-12-01 00:36:59 +0000 UTC" firstStartedPulling="2025-12-01 00:37:02.513482476 +0000 UTC m=+1843.294251550" lastFinishedPulling="2025-12-01 00:37:13.847193697 +0000 UTC m=+1854.627962771" observedRunningTime="2025-12-01 00:37:14.701813084 +0000 UTC m=+1855.482582168" watchObservedRunningTime="2025-12-01 00:37:14.713170937 +0000 UTC m=+1855.493940011" Dec 01 00:37:14 crc kubenswrapper[4846]: I1201 00:37:14.743884 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" podStartSLOduration=6.363059067 podStartE2EDuration="18.743862682s" podCreationTimestamp="2025-12-01 00:36:56 +0000 UTC" firstStartedPulling="2025-12-01 00:37:01.525602604 +0000 UTC m=+1842.306371678" lastFinishedPulling="2025-12-01 00:37:13.906406219 +0000 UTC m=+1854.687175293" observedRunningTime="2025-12-01 00:37:14.740130726 +0000 UTC m=+1855.520899800" watchObservedRunningTime="2025-12-01 00:37:14.743862682 +0000 UTC m=+1855.524631756" Dec 01 00:37:15 crc kubenswrapper[4846]: I1201 00:37:15.649718 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerStarted","Data":"fdd51da29a3aee86ad0f3b971bc2cb8ec63e7145689623ceb938137aff1b7de8"} Dec 01 00:37:15 crc kubenswrapper[4846]: I1201 00:37:15.652810 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerStarted","Data":"d42ecbe2674b4bb4f5d4ab389281afd5469087c711beaa23668307921017da25"} Dec 01 00:37:15 crc kubenswrapper[4846]: I1201 00:37:15.687206 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" podStartSLOduration=3.364839019 podStartE2EDuration="3.687171297s" podCreationTimestamp="2025-12-01 00:37:12 +0000 UTC" firstStartedPulling="2025-12-01 00:37:14.255592032 +0000 UTC m=+1855.036361106" lastFinishedPulling="2025-12-01 00:37:14.57792431 +0000 UTC m=+1855.358693384" observedRunningTime="2025-12-01 00:37:15.670124927 +0000 UTC m=+1856.450894001" watchObservedRunningTime="2025-12-01 00:37:15.687171297 +0000 UTC m=+1856.467940371" Dec 01 00:37:15 crc kubenswrapper[4846]: I1201 00:37:15.691118 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" podStartSLOduration=4.417265541 podStartE2EDuration="4.69110269s" podCreationTimestamp="2025-12-01 00:37:11 +0000 UTC" firstStartedPulling="2025-12-01 00:37:14.306413153 +0000 UTC m=+1855.087182227" lastFinishedPulling="2025-12-01 00:37:14.580250302 +0000 UTC m=+1855.361019376" observedRunningTime="2025-12-01 00:37:15.687970513 +0000 UTC m=+1856.468739597" watchObservedRunningTime="2025-12-01 00:37:15.69110269 +0000 UTC m=+1856.471871764" Dec 01 00:37:20 crc kubenswrapper[4846]: I1201 00:37:20.580624 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:37:20 crc kubenswrapper[4846]: E1201 00:37:20.581352 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.391325 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.391957 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" podUID="d78d3304-bbdb-4c59-a156-dfba027c6305" containerName="default-interconnect" containerID="cri-o://fa3690359c0722a92dbef0200c78fd140578133bc3dc1579ca080d79ab90c9f7" gracePeriod=30 Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.725402 4846 generic.go:334] "Generic (PLEG): container finished" podID="a7aaf165-c4f0-4a91-9682-d92f647deec8" containerID="b0db48abe4da31b9120bf23e48d851b3a2d31792eca67474b6933fe99097d8ca" exitCode=0 Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.725838 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerDied","Data":"b0db48abe4da31b9120bf23e48d851b3a2d31792eca67474b6933fe99097d8ca"} Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.726318 4846 scope.go:117] "RemoveContainer" 
containerID="b0db48abe4da31b9120bf23e48d851b3a2d31792eca67474b6933fe99097d8ca" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.732510 4846 generic.go:334] "Generic (PLEG): container finished" podID="7a969c8d-e631-466c-b189-b798d0b03173" containerID="317dd8ac505bf09f11d22767dbf5daa976cc086572586fb49e4b423ed8b09968" exitCode=0 Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.732615 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerDied","Data":"317dd8ac505bf09f11d22767dbf5daa976cc086572586fb49e4b423ed8b09968"} Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.733468 4846 scope.go:117] "RemoveContainer" containerID="317dd8ac505bf09f11d22767dbf5daa976cc086572586fb49e4b423ed8b09968" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.737195 4846 generic.go:334] "Generic (PLEG): container finished" podID="97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85" containerID="900628c2467a7e9058821bf373b699b773dcb4946a99694e303c557e09e83f21" exitCode=0 Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.737260 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerDied","Data":"900628c2467a7e9058821bf373b699b773dcb4946a99694e303c557e09e83f21"} Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.738071 4846 scope.go:117] "RemoveContainer" containerID="900628c2467a7e9058821bf373b699b773dcb4946a99694e303c557e09e83f21" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.741263 4846 generic.go:334] "Generic (PLEG): container finished" podID="d78d3304-bbdb-4c59-a156-dfba027c6305" containerID="fa3690359c0722a92dbef0200c78fd140578133bc3dc1579ca080d79ab90c9f7" exitCode=0 Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.741316 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" event={"ID":"d78d3304-bbdb-4c59-a156-dfba027c6305","Type":"ContainerDied","Data":"fa3690359c0722a92dbef0200c78fd140578133bc3dc1579ca080d79ab90c9f7"} Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.833574 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.980479 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.980850 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.981066 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.981851 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.981935 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.981966 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67pz9\" (UniqueName: \"kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.982005 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca\") pod \"d78d3304-bbdb-4c59-a156-dfba027c6305\" (UID: \"d78d3304-bbdb-4c59-a156-dfba027c6305\") " Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.982522 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.987795 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.987949 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.988241 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9" (OuterVolumeSpecName: "kube-api-access-67pz9") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "kube-api-access-67pz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:37:26 crc kubenswrapper[4846]: I1201 00:37:26.988590 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.006487 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.007802 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "d78d3304-bbdb-4c59-a156-dfba027c6305" (UID: "d78d3304-bbdb-4c59-a156-dfba027c6305"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083811 4846 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083866 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67pz9\" (UniqueName: \"kubernetes.io/projected/d78d3304-bbdb-4c59-a156-dfba027c6305-kube-api-access-67pz9\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083890 4846 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083908 4846 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083925 4846 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083948 4846 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d78d3304-bbdb-4c59-a156-dfba027c6305-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.083968 4846 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d78d3304-bbdb-4c59-a156-dfba027c6305-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.751073 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerStarted","Data":"959add98396a7f2d0cc6a535318161d5b63500933783b35d93662ed7ee1a6b05"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.752727 4846 generic.go:334] "Generic (PLEG): container finished" podID="e6efa439-ca46-4c99-be4d-60f9b510683b" containerID="06e40a73c4b76683bdb5e5fe53a21c65f11d50c45f0aa4302dd0fdfe21b86127" exitCode=0 Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.752807 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerDied","Data":"06e40a73c4b76683bdb5e5fe53a21c65f11d50c45f0aa4302dd0fdfe21b86127"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.753576 4846 scope.go:117] "RemoveContainer" containerID="06e40a73c4b76683bdb5e5fe53a21c65f11d50c45f0aa4302dd0fdfe21b86127" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.756671 4846 generic.go:334] "Generic (PLEG): container finished" podID="1fce5bc7-b9ff-465c-88ee-ccb105c9f364" containerID="7f661657af7027c2253ccd28b08b683b26b46c2d3888c50e34cba6aaf81ccf52" exitCode=0 Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 
00:37:27.756776 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerDied","Data":"7f661657af7027c2253ccd28b08b683b26b46c2d3888c50e34cba6aaf81ccf52"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.757585 4846 scope.go:117] "RemoveContainer" containerID="7f661657af7027c2253ccd28b08b683b26b46c2d3888c50e34cba6aaf81ccf52" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.760141 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerStarted","Data":"8590c1f41c6db85eb526424f81b7f70b092b90280f1727f56044a9f42bc8eff0"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.764079 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"ad51be717a05886b149b8f99bc9e23aaee2c1d1ea8b4b2a88d1f73c776ad0113"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.767424 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" event={"ID":"d78d3304-bbdb-4c59-a156-dfba027c6305","Type":"ContainerDied","Data":"9e9cc0792150f87de9a13a9a3a074063c154c240113e13d0900949347fd052ed"} Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.767479 4846 scope.go:117] "RemoveContainer" containerID="fa3690359c0722a92dbef0200c78fd140578133bc3dc1579ca080d79ab90c9f7" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.767624 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-p5vl8" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.831105 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gt9q4"] Dec 01 00:37:27 crc kubenswrapper[4846]: E1201 00:37:27.831539 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78d3304-bbdb-4c59-a156-dfba027c6305" containerName="default-interconnect" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.831558 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78d3304-bbdb-4c59-a156-dfba027c6305" containerName="default-interconnect" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.831823 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78d3304-bbdb-4c59-a156-dfba027c6305" containerName="default-interconnect" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.832436 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.834980 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.838916 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.839180 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-w77zl" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.839571 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.839624 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.840073 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.840642 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.855541 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gt9q4"] Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.961768 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:37:27 crc kubenswrapper[4846]: I1201 00:37:27.967814 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-p5vl8"] Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013060 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013146 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-config\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013187 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013225 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013272 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013316 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-users\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.013377 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxkv\" (UniqueName: \"kubernetes.io/projected/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-kube-api-access-5sxkv\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121254 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-config\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121301 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121327 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121361 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121389 4846 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-users\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121430 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sxkv\" (UniqueName: \"kubernetes.io/projected/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-kube-api-access-5sxkv\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.121458 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.124811 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-config\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.133024 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.130694 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-sasl-users\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.134750 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.136711 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.139077 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" 
(UniqueName: \"kubernetes.io/secret/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.144732 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sxkv\" (UniqueName: \"kubernetes.io/projected/5a36c0b9-4959-4f1b-a917-3e67a45c1fb4-kube-api-access-5sxkv\") pod \"default-interconnect-68864d46cb-gt9q4\" (UID: \"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4\") " pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.160047 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.600608 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-gt9q4"] Dec 01 00:37:28 crc kubenswrapper[4846]: W1201 00:37:28.607740 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a36c0b9_4959_4f1b_a917_3e67a45c1fb4.slice/crio-a7a2a4862e9514d4814f9ea437fe00ee39f5eeeb53cb8ec6020ec6c41bdc083d WatchSource:0}: Error finding container a7a2a4862e9514d4814f9ea437fe00ee39f5eeeb53cb8ec6020ec6c41bdc083d: Status 404 returned error can't find the container with id a7a2a4862e9514d4814f9ea437fe00ee39f5eeeb53cb8ec6020ec6c41bdc083d Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.776761 4846 generic.go:334] "Generic (PLEG): container finished" podID="a7aaf165-c4f0-4a91-9682-d92f647deec8" containerID="959add98396a7f2d0cc6a535318161d5b63500933783b35d93662ed7ee1a6b05" exitCode=0 Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.776984 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerDied","Data":"959add98396a7f2d0cc6a535318161d5b63500933783b35d93662ed7ee1a6b05"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.778272 4846 scope.go:117] "RemoveContainer" containerID="b0db48abe4da31b9120bf23e48d851b3a2d31792eca67474b6933fe99097d8ca" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.778856 4846 scope.go:117] "RemoveContainer" containerID="959add98396a7f2d0cc6a535318161d5b63500933783b35d93662ed7ee1a6b05" Dec 01 00:37:28 crc kubenswrapper[4846]: E1201 00:37:28.779155 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-fb6795fb7-crs65_service-telemetry(a7aaf165-c4f0-4a91-9682-d92f647deec8)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" podUID="a7aaf165-c4f0-4a91-9682-d92f647deec8" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.795664 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"cc60c54577ea365dc51bb83dc0e7f11ad7b84ebeac0d159fda8abdc0ddeb07ed"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.798446 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"468a151cd4eef35400e761fee7e8baea0667ad257a31ae42bc68cee0e1d43145"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.801169 4846 generic.go:334] "Generic (PLEG): container finished" podID="7a969c8d-e631-466c-b189-b798d0b03173" containerID="8590c1f41c6db85eb526424f81b7f70b092b90280f1727f56044a9f42bc8eff0" exitCode=0 Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.801230 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerDied","Data":"8590c1f41c6db85eb526424f81b7f70b092b90280f1727f56044a9f42bc8eff0"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.801551 4846 scope.go:117] "RemoveContainer" containerID="8590c1f41c6db85eb526424f81b7f70b092b90280f1727f56044a9f42bc8eff0" Dec 01 00:37:28 crc kubenswrapper[4846]: E1201 00:37:28.801801 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd_service-telemetry(7a969c8d-e631-466c-b189-b798d0b03173)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" podUID="7a969c8d-e631-466c-b189-b798d0b03173" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.813350 4846 generic.go:334] "Generic (PLEG): container finished" podID="97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85" containerID="ad51be717a05886b149b8f99bc9e23aaee2c1d1ea8b4b2a88d1f73c776ad0113" exitCode=0 Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.813417 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerDied","Data":"ad51be717a05886b149b8f99bc9e23aaee2c1d1ea8b4b2a88d1f73c776ad0113"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.814014 4846 scope.go:117] "RemoveContainer" containerID="ad51be717a05886b149b8f99bc9e23aaee2c1d1ea8b4b2a88d1f73c776ad0113" Dec 01 00:37:28 crc kubenswrapper[4846]: E1201 00:37:28.814268 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w_service-telemetry(97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" podUID="97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.825503 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" event={"ID":"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4","Type":"ContainerStarted","Data":"f3ac5e515c0110b95be6ce3abd6ae7d15025bc56980659ab5ece10d870715d20"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.825547 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" event={"ID":"5a36c0b9-4959-4f1b-a917-3e67a45c1fb4","Type":"ContainerStarted","Data":"a7a2a4862e9514d4814f9ea437fe00ee39f5eeeb53cb8ec6020ec6c41bdc083d"} Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.831010 4846 scope.go:117] "RemoveContainer" 
containerID="317dd8ac505bf09f11d22767dbf5daa976cc086572586fb49e4b423ed8b09968" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.915026 4846 scope.go:117] "RemoveContainer" containerID="900628c2467a7e9058821bf373b699b773dcb4946a99694e303c557e09e83f21" Dec 01 00:37:28 crc kubenswrapper[4846]: I1201 00:37:28.918261 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-gt9q4" podStartSLOduration=2.918235163 podStartE2EDuration="2.918235163s" podCreationTimestamp="2025-12-01 00:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-01 00:37:28.915435676 +0000 UTC m=+1869.696204740" watchObservedRunningTime="2025-12-01 00:37:28.918235163 +0000 UTC m=+1869.699004237" Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.590606 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78d3304-bbdb-4c59-a156-dfba027c6305" path="/var/lib/kubelet/pods/d78d3304-bbdb-4c59-a156-dfba027c6305/volumes" Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.839838 4846 generic.go:334] "Generic (PLEG): container finished" podID="e6efa439-ca46-4c99-be4d-60f9b510683b" containerID="cc60c54577ea365dc51bb83dc0e7f11ad7b84ebeac0d159fda8abdc0ddeb07ed" exitCode=0 Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.839934 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerDied","Data":"cc60c54577ea365dc51bb83dc0e7f11ad7b84ebeac0d159fda8abdc0ddeb07ed"} Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.840013 4846 scope.go:117] "RemoveContainer" containerID="06e40a73c4b76683bdb5e5fe53a21c65f11d50c45f0aa4302dd0fdfe21b86127" Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.840625 4846 scope.go:117] "RemoveContainer" containerID="cc60c54577ea365dc51bb83dc0e7f11ad7b84ebeac0d159fda8abdc0ddeb07ed" Dec 01 00:37:29 crc kubenswrapper[4846]: E1201 00:37:29.840845 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x_service-telemetry(e6efa439-ca46-4c99-be4d-60f9b510683b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" podUID="e6efa439-ca46-4c99-be4d-60f9b510683b" Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.844867 4846 generic.go:334] "Generic (PLEG): container finished" podID="1fce5bc7-b9ff-465c-88ee-ccb105c9f364" containerID="468a151cd4eef35400e761fee7e8baea0667ad257a31ae42bc68cee0e1d43145" exitCode=0 Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.844930 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerDied","Data":"468a151cd4eef35400e761fee7e8baea0667ad257a31ae42bc68cee0e1d43145"} Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.845530 4846 scope.go:117] "RemoveContainer" containerID="468a151cd4eef35400e761fee7e8baea0667ad257a31ae42bc68cee0e1d43145" Dec 01 00:37:29 crc kubenswrapper[4846]: E1201 00:37:29.845818 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2_service-telemetry(1fce5bc7-b9ff-465c-88ee-ccb105c9f364)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" podUID="1fce5bc7-b9ff-465c-88ee-ccb105c9f364" Dec 01 00:37:29 crc kubenswrapper[4846]: I1201 00:37:29.895848 4846 scope.go:117] "RemoveContainer" containerID="7f661657af7027c2253ccd28b08b683b26b46c2d3888c50e34cba6aaf81ccf52" Dec 01 00:37:30 crc kubenswrapper[4846]: I1201 00:37:30.971547 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 01 00:37:30 crc kubenswrapper[4846]: I1201 00:37:30.972852 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 01 00:37:30 crc kubenswrapper[4846]: I1201 00:37:30.978371 4846 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 01 00:37:30 crc kubenswrapper[4846]: I1201 00:37:30.978481 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 01 00:37:30 crc kubenswrapper[4846]: I1201 00:37:30.994005 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.015212 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6fn\" (UniqueName: \"kubernetes.io/projected/7ddd3132-7244-46c0-b832-f2c83d35b4c6-kube-api-access-gj6fn\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.015348 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7ddd3132-7244-46c0-b832-f2c83d35b4c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.015378 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7ddd3132-7244-46c0-b832-f2c83d35b4c6-qdr-test-config\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.116149 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7ddd3132-7244-46c0-b832-f2c83d35b4c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.116438 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7ddd3132-7244-46c0-b832-f2c83d35b4c6-qdr-test-config\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.116594 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6fn\" (UniqueName: \"kubernetes.io/projected/7ddd3132-7244-46c0-b832-f2c83d35b4c6-kube-api-access-gj6fn\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " 
pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.117813 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/7ddd3132-7244-46c0-b832-f2c83d35b4c6-qdr-test-config\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.123638 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/7ddd3132-7244-46c0-b832-f2c83d35b4c6-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.136319 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6fn\" (UniqueName: \"kubernetes.io/projected/7ddd3132-7244-46c0-b832-f2c83d35b4c6-kube-api-access-gj6fn\") pod \"qdr-test\" (UID: \"7ddd3132-7244-46c0-b832-f2c83d35b4c6\") " pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.298766 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.734486 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 01 00:37:31 crc kubenswrapper[4846]: W1201 00:37:31.748823 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ddd3132_7244_46c0_b832_f2c83d35b4c6.slice/crio-f77c479e69dd92abe23e4753145d5b877f8aa924c9b32e49086620d18e459662 WatchSource:0}: Error finding container f77c479e69dd92abe23e4753145d5b877f8aa924c9b32e49086620d18e459662: Status 404 returned error can't find the container with id f77c479e69dd92abe23e4753145d5b877f8aa924c9b32e49086620d18e459662 Dec 01 00:37:31 crc kubenswrapper[4846]: I1201 00:37:31.875198 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"7ddd3132-7244-46c0-b832-f2c83d35b4c6","Type":"ContainerStarted","Data":"f77c479e69dd92abe23e4753145d5b877f8aa924c9b32e49086620d18e459662"} Dec 01 00:37:33 crc kubenswrapper[4846]: I1201 00:37:33.580714 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:37:33 crc kubenswrapper[4846]: I1201 00:37:33.938213 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25"} Dec 01 00:37:40 crc kubenswrapper[4846]: I1201 00:37:40.580620 4846 scope.go:117] "RemoveContainer" containerID="ad51be717a05886b149b8f99bc9e23aaee2c1d1ea8b4b2a88d1f73c776ad0113" Dec 01 00:37:40 crc kubenswrapper[4846]: I1201 00:37:40.583218 4846 scope.go:117] "RemoveContainer" containerID="468a151cd4eef35400e761fee7e8baea0667ad257a31ae42bc68cee0e1d43145" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.013165 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w" event={"ID":"97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85","Type":"ContainerStarted","Data":"14c414d90a5318a35af8edd82fe5ecaf53ccae9f7bf1244d04d5eebc64bd2a7d"} Dec 01 00:37:42 crc 
kubenswrapper[4846]: I1201 00:37:42.017694 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2" event={"ID":"1fce5bc7-b9ff-465c-88ee-ccb105c9f364","Type":"ContainerStarted","Data":"8599ef9e33c5c9dc7c528d2dc9eeb54a6bbb3dc2be32f5a6525a94280fb5791c"} Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.025933 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"7ddd3132-7244-46c0-b832-f2c83d35b4c6","Type":"ContainerStarted","Data":"e0d95fade338a3594fe49ec0569d83e37eefa932df1012fd4b93014713dbff2f"} Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.055102 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.965456848 podStartE2EDuration="12.055081348s" podCreationTimestamp="2025-12-01 00:37:30 +0000 UTC" firstStartedPulling="2025-12-01 00:37:31.750467651 +0000 UTC m=+1872.531236725" lastFinishedPulling="2025-12-01 00:37:40.840092151 +0000 UTC m=+1881.620861225" observedRunningTime="2025-12-01 00:37:42.04934273 +0000 UTC m=+1882.830111824" watchObservedRunningTime="2025-12-01 00:37:42.055081348 +0000 UTC m=+1882.835850422" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.348189 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-g2vz8"] Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.350037 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.351587 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.353051 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.353105 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.353159 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.353486 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.356311 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.369765 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-g2vz8"] Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.375418 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.375823 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.375959 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.376273 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.376479 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbqq\" (UniqueName: \"kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.376619 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.376940 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478220 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478328 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478396 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 
00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478427 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478492 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478582 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.478656 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbqq\" (UniqueName: \"kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.479327 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.479940 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.480276 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.480407 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.480535 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: 
\"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.480592 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.502106 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbqq\" (UniqueName: \"kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq\") pod \"stf-smoketest-smoke1-g2vz8\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.665665 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.724870 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.725644 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.741917 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.782485 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57sp\" (UniqueName: \"kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp\") pod \"curl\" (UID: \"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8\") " pod="service-telemetry/curl" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.883021 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57sp\" (UniqueName: \"kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp\") pod \"curl\" (UID: \"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8\") " pod="service-telemetry/curl" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.913602 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57sp\" (UniqueName: \"kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp\") pod \"curl\" (UID: \"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8\") " pod="service-telemetry/curl" Dec 01 00:37:42 crc kubenswrapper[4846]: I1201 00:37:42.960428 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-g2vz8"] Dec 01 00:37:43 crc kubenswrapper[4846]: I1201 00:37:43.035374 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerStarted","Data":"bf83aac2426d2798709445b1bed2d9d6d254da36b9a6df6a8dd0235147c50979"} Dec 01 00:37:43 crc kubenswrapper[4846]: I1201 00:37:43.091316 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 01 00:37:43 crc kubenswrapper[4846]: I1201 00:37:43.337230 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 01 00:37:43 crc kubenswrapper[4846]: I1201 00:37:43.579962 4846 scope.go:117] "RemoveContainer" containerID="959add98396a7f2d0cc6a535318161d5b63500933783b35d93662ed7ee1a6b05" Dec 01 00:37:43 crc kubenswrapper[4846]: I1201 00:37:43.580282 4846 scope.go:117] "RemoveContainer" containerID="8590c1f41c6db85eb526424f81b7f70b092b90280f1727f56044a9f42bc8eff0" Dec 01 00:37:44 crc kubenswrapper[4846]: I1201 00:37:44.049793 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8","Type":"ContainerStarted","Data":"0b25608df6e54b9e028819d51eb886fdfa9dc792238fe00cc145ad61caca2639"} Dec 01 00:37:45 crc kubenswrapper[4846]: I1201 00:37:45.066057 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd" event={"ID":"7a969c8d-e631-466c-b189-b798d0b03173","Type":"ContainerStarted","Data":"d442a2bc4fdd21810028fedbef64449b65bf84078a5fb60a83b2184bc74f14d1"} Dec 01 00:37:45 crc kubenswrapper[4846]: I1201 00:37:45.071007 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-fb6795fb7-crs65" event={"ID":"a7aaf165-c4f0-4a91-9682-d92f647deec8","Type":"ContainerStarted","Data":"327d120a8e6403297beed97591672f1cbc0b04509592c63d3bafbee9956b48f7"} Dec 01 00:37:45 crc kubenswrapper[4846]: I1201 00:37:45.581533 4846 scope.go:117] "RemoveContainer" containerID="cc60c54577ea365dc51bb83dc0e7f11ad7b84ebeac0d159fda8abdc0ddeb07ed" Dec 01 00:37:56 crc kubenswrapper[4846]: I1201 00:37:56.153296 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8","Type":"ContainerStarted","Data":"ba3abdbba0e400a2a888745165c4044d422fc95a62c84501c17ae931a3136ca0"} Dec 01 00:37:56 crc kubenswrapper[4846]: I1201 00:37:56.155261 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerStarted","Data":"5a16239371e4e384921245f43dd0906c3414140766fbb6911cc5233155b14fce"} Dec 01 00:37:56 crc kubenswrapper[4846]: I1201 00:37:56.159609 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x" event={"ID":"e6efa439-ca46-4c99-be4d-60f9b510683b","Type":"ContainerStarted","Data":"8f01593a6014aa4d0d5645c5a59525882dcc1c79c67b6750458b02b1a418564f"} Dec 01 00:37:56 crc kubenswrapper[4846]: I1201 00:37:56.174017 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" podStartSLOduration=1.74234576 podStartE2EDuration="14.173998134s" podCreationTimestamp="2025-12-01 00:37:42 +0000 UTC" firstStartedPulling="2025-12-01 00:37:43.346189538 +0000 UTC m=+1884.126958612" lastFinishedPulling="2025-12-01 00:37:55.777841922 +0000 UTC m=+1896.558610986" observedRunningTime="2025-12-01 00:37:56.173459027 +0000 UTC m=+1896.954228111" watchObservedRunningTime="2025-12-01 00:37:56.173998134 +0000 UTC m=+1896.954767208" Dec 01 00:37:57 crc kubenswrapper[4846]: I1201 00:37:57.182488 4846 generic.go:334] "Generic (PLEG): container finished" podID="d0ec338c-f753-48cd-b5c9-92bec6d6b2e8" 
containerID="ba3abdbba0e400a2a888745165c4044d422fc95a62c84501c17ae931a3136ca0" exitCode=0 Dec 01 00:37:57 crc kubenswrapper[4846]: I1201 00:37:57.182543 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8","Type":"ContainerDied","Data":"ba3abdbba0e400a2a888745165c4044d422fc95a62c84501c17ae931a3136ca0"} Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.448421 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.573811 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57sp\" (UniqueName: \"kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp\") pod \"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8\" (UID: \"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8\") " Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.577990 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp" (OuterVolumeSpecName: "kube-api-access-v57sp") pod "d0ec338c-f753-48cd-b5c9-92bec6d6b2e8" (UID: "d0ec338c-f753-48cd-b5c9-92bec6d6b2e8"). InnerVolumeSpecName "kube-api-access-v57sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.618092 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_d0ec338c-f753-48cd-b5c9-92bec6d6b2e8/curl/0.log" Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.675581 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57sp\" (UniqueName: \"kubernetes.io/projected/d0ec338c-f753-48cd-b5c9-92bec6d6b2e8-kube-api-access-v57sp\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:03 crc kubenswrapper[4846]: I1201 00:38:03.871220 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-k55wh_fd576764-efbc-4385-ab49-e960a95d095d/prometheus-webhook-snmp/0.log" Dec 01 00:38:04 crc kubenswrapper[4846]: I1201 00:38:04.241354 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"d0ec338c-f753-48cd-b5c9-92bec6d6b2e8","Type":"ContainerDied","Data":"0b25608df6e54b9e028819d51eb886fdfa9dc792238fe00cc145ad61caca2639"} Dec 01 00:38:04 crc kubenswrapper[4846]: I1201 00:38:04.241408 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b25608df6e54b9e028819d51eb886fdfa9dc792238fe00cc145ad61caca2639" Dec 01 00:38:04 crc kubenswrapper[4846]: I1201 00:38:04.241485 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 01 00:38:04 crc kubenswrapper[4846]: I1201 00:38:04.245795 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerStarted","Data":"4599b62393728d09bdbb87abd999f35d23e4947b62f7d4226592bbca11d2b695"} Dec 01 00:38:04 crc kubenswrapper[4846]: I1201 00:38:04.273078 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" podStartSLOduration=1.7759826699999999 podStartE2EDuration="22.273056489s" podCreationTimestamp="2025-12-01 00:37:42 +0000 UTC" firstStartedPulling="2025-12-01 00:37:42.970405162 +0000 UTC m=+1883.751174236" lastFinishedPulling="2025-12-01 00:38:03.467478971 +0000 UTC m=+1904.248248055" observedRunningTime="2025-12-01 00:38:04.270502579 +0000 UTC m=+1905.051271653" watchObservedRunningTime="2025-12-01 00:38:04.273056489 +0000 UTC m=+1905.053825563" Dec 01 00:38:30 crc kubenswrapper[4846]: I1201 00:38:30.314321 4846 generic.go:334] "Generic (PLEG): container finished" podID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerID="5a16239371e4e384921245f43dd0906c3414140766fbb6911cc5233155b14fce" exitCode=0 Dec 01 00:38:30 crc kubenswrapper[4846]: I1201 00:38:30.314454 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerDied","Data":"5a16239371e4e384921245f43dd0906c3414140766fbb6911cc5233155b14fce"} Dec 01 00:38:30 crc kubenswrapper[4846]: I1201 00:38:30.315755 4846 scope.go:117] "RemoveContainer" containerID="5a16239371e4e384921245f43dd0906c3414140766fbb6911cc5233155b14fce" Dec 01 00:38:34 crc kubenswrapper[4846]: I1201 00:38:34.014350 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-k55wh_fd576764-efbc-4385-ab49-e960a95d095d/prometheus-webhook-snmp/0.log" Dec 01 00:38:35 crc kubenswrapper[4846]: I1201 00:38:35.358063 4846 generic.go:334] "Generic (PLEG): container finished" podID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerID="4599b62393728d09bdbb87abd999f35d23e4947b62f7d4226592bbca11d2b695" exitCode=0 Dec 01 00:38:35 crc kubenswrapper[4846]: I1201 00:38:35.358139 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerDied","Data":"4599b62393728d09bdbb87abd999f35d23e4947b62f7d4226592bbca11d2b695"} Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.720364 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.787747 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788037 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lbqq\" (UniqueName: \"kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788123 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788300 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788382 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788458 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.788537 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log\") pod \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\" (UID: \"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b\") " Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.801904 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq" (OuterVolumeSpecName: "kube-api-access-4lbqq") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "kube-api-access-4lbqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.812961 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.825165 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.827389 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.831073 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.832232 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.854288 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" (UID: "622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890040 4846 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890078 4846 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890095 4846 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890110 4846 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890125 4846 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890138 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lbqq\" (UniqueName: \"kubernetes.io/projected/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-kube-api-access-4lbqq\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:36 crc kubenswrapper[4846]: I1201 00:38:36.890151 4846 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 01 00:38:37 crc kubenswrapper[4846]: I1201 00:38:37.381129 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" event={"ID":"622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b","Type":"ContainerDied","Data":"bf83aac2426d2798709445b1bed2d9d6d254da36b9a6df6a8dd0235147c50979"} Dec 01 00:38:37 crc kubenswrapper[4846]: I1201 00:38:37.381200 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf83aac2426d2798709445b1bed2d9d6d254da36b9a6df6a8dd0235147c50979" Dec 01 00:38:37 crc kubenswrapper[4846]: I1201 00:38:37.381226 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-g2vz8" Dec 01 00:38:38 crc kubenswrapper[4846]: I1201 00:38:38.697218 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-g2vz8_622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b/smoketest-collectd/0.log" Dec 01 00:38:38 crc kubenswrapper[4846]: I1201 00:38:38.994525 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-g2vz8_622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b/smoketest-ceilometer/0.log" Dec 01 00:38:39 crc kubenswrapper[4846]: I1201 00:38:39.345394 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-gt9q4_5a36c0b9-4959-4f1b-a917-3e67a45c1fb4/default-interconnect/0.log" Dec 01 00:38:39 crc kubenswrapper[4846]: I1201 00:38:39.654966 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x_e6efa439-ca46-4c99-be4d-60f9b510683b/bridge/2.log" Dec 01 00:38:40 crc kubenswrapper[4846]: I1201 00:38:40.001404 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-tdt9x_e6efa439-ca46-4c99-be4d-60f9b510683b/sg-core/0.log" Dec 01 00:38:40 crc kubenswrapper[4846]: I1201 00:38:40.371036 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-fb6795fb7-crs65_a7aaf165-c4f0-4a91-9682-d92f647deec8/bridge/2.log" Dec 01 00:38:40 crc kubenswrapper[4846]: I1201 00:38:40.682284 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-fb6795fb7-crs65_a7aaf165-c4f0-4a91-9682-d92f647deec8/sg-core/0.log" Dec 01 00:38:40 crc kubenswrapper[4846]: I1201 00:38:40.987720 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2_1fce5bc7-b9ff-465c-88ee-ccb105c9f364/bridge/2.log" Dec 01 00:38:41 crc kubenswrapper[4846]: I1201 00:38:41.267225 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-fctc2_1fce5bc7-b9ff-465c-88ee-ccb105c9f364/sg-core/0.log" Dec 01 00:38:41 crc kubenswrapper[4846]: I1201 00:38:41.563838 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd_7a969c8d-e631-466c-b189-b798d0b03173/bridge/2.log" Dec 01 00:38:41 crc kubenswrapper[4846]: I1201 00:38:41.917205 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7f49956456-hkmxd_7a969c8d-e631-466c-b189-b798d0b03173/sg-core/0.log" Dec 01 00:38:42 crc kubenswrapper[4846]: I1201 00:38:42.254478 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w_97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85/bridge/2.log" Dec 01 00:38:42 crc kubenswrapper[4846]: I1201 00:38:42.575922 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-27j7w_97fd4e8d-e3bb-4a6c-b9c9-9448fab5dc85/sg-core/0.log" Dec 01 00:38:46 crc kubenswrapper[4846]: I1201 00:38:46.213516 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-95bf5898f-5jc24_df86ead6-ba68-4d18-8d2f-3e74d8b0328e/operator/0.log" 
Dec 01 00:38:46 crc kubenswrapper[4846]: I1201 00:38:46.532793 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_d2e6f159-62bc-42e9-94b7-5d33a34a2bbe/prometheus/0.log" Dec 01 00:38:46 crc kubenswrapper[4846]: I1201 00:38:46.848240 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_727053d3-2d7a-4a4b-8a99-3762f38c8344/elasticsearch/0.log" Dec 01 00:38:47 crc kubenswrapper[4846]: I1201 00:38:47.195957 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-k55wh_fd576764-efbc-4385-ab49-e960a95d095d/prometheus-webhook-snmp/0.log" Dec 01 00:38:47 crc kubenswrapper[4846]: I1201 00:38:47.481480 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_9d27e84f-ca2b-42fc-bbbd-2132be113b65/alertmanager/0.log" Dec 01 00:39:03 crc kubenswrapper[4846]: I1201 00:39:03.591114 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7998f748fc-66l4k_5dcb0316-9aa0-4006-8850-3d4f820f988d/operator/0.log" Dec 01 00:39:06 crc kubenswrapper[4846]: I1201 00:39:06.582937 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-95bf5898f-5jc24_df86ead6-ba68-4d18-8d2f-3e74d8b0328e/operator/0.log" Dec 01 00:39:06 crc kubenswrapper[4846]: I1201 00:39:06.861453 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_7ddd3132-7244-46c0-b832-f2c83d35b4c6/qdr/0.log" Dec 01 00:39:40 crc kubenswrapper[4846]: I1201 00:39:40.794311 4846 scope.go:117] "RemoveContainer" containerID="c3e62b669e133a14fe197d4df9f2cc24681084f892ab054ae4daf9dd27a9f664" Dec 01 00:39:40 crc kubenswrapper[4846]: I1201 00:39:40.840055 4846 scope.go:117] "RemoveContainer" containerID="85df167dd4afe5ce9cf2946786953464d542d84d59fa0e247bd2719b81625254" Dec 01 00:39:40 crc kubenswrapper[4846]: I1201 00:39:40.867364 4846 scope.go:117] "RemoveContainer" containerID="61a07c2c951cafde52a2a52fbf76181586ea6f93a60b3e03da1f13cabbe4c303" Dec 01 00:39:40 crc kubenswrapper[4846]: I1201 00:39:40.891811 4846 scope.go:117] "RemoveContainer" containerID="7f8a082eaec5f564874dbc2e5dd2d57e66c6b026a60e65edd0d533bfeffc4228" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790236 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b8f7r/must-gather-db8sq"] Dec 01 00:39:41 crc kubenswrapper[4846]: E1201 00:39:41.790607 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-collectd" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790622 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-collectd" Dec 01 00:39:41 crc kubenswrapper[4846]: E1201 00:39:41.790653 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ec338c-f753-48cd-b5c9-92bec6d6b2e8" containerName="curl" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790661 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ec338c-f753-48cd-b5c9-92bec6d6b2e8" containerName="curl" Dec 01 00:39:41 crc kubenswrapper[4846]: E1201 00:39:41.790700 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-ceilometer" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790710 4846 
state_mem.go:107] "Deleted CPUSet assignment" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-ceilometer" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790859 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-ceilometer" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790881 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="622415a7-1eb7-4d9c-bc3b-39b7e6e65f0b" containerName="smoketest-collectd" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.790890 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ec338c-f753-48cd-b5c9-92bec6d6b2e8" containerName="curl" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.791748 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.793698 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b8f7r"/"kube-root-ca.crt" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.809837 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b8f7r/must-gather-db8sq"] Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.816596 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b8f7r"/"openshift-service-ca.crt" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.893308 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.893363 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6d7\" (UniqueName: \"kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.994364 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.994720 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6d7\" (UniqueName: \"kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:41 crc kubenswrapper[4846]: I1201 00:39:41.994999 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:42 crc kubenswrapper[4846]: I1201 00:39:42.013819 4846 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx6d7\" (UniqueName: \"kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7\") pod \"must-gather-db8sq\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:42 crc kubenswrapper[4846]: I1201 00:39:42.110921 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:39:42 crc kubenswrapper[4846]: I1201 00:39:42.572790 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b8f7r/must-gather-db8sq"] Dec 01 00:39:42 crc kubenswrapper[4846]: I1201 00:39:42.977989 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8f7r/must-gather-db8sq" event={"ID":"403386ea-31a7-40cf-a922-f569d0109eb2","Type":"ContainerStarted","Data":"8c2a04ec4baca2c41b39afe9c03bfaab067525a621a0798fa1a0d373dffa05d0"} Dec 01 00:39:47 crc kubenswrapper[4846]: I1201 00:39:47.019846 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8f7r/must-gather-db8sq" event={"ID":"403386ea-31a7-40cf-a922-f569d0109eb2","Type":"ContainerStarted","Data":"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86"} Dec 01 00:39:48 crc kubenswrapper[4846]: I1201 00:39:48.040743 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8f7r/must-gather-db8sq" event={"ID":"403386ea-31a7-40cf-a922-f569d0109eb2","Type":"ContainerStarted","Data":"cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50"} Dec 01 00:39:48 crc kubenswrapper[4846]: I1201 00:39:48.063225 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b8f7r/must-gather-db8sq" podStartSLOduration=2.981230009 podStartE2EDuration="7.063209s" podCreationTimestamp="2025-12-01 00:39:41 +0000 UTC" firstStartedPulling="2025-12-01 00:39:42.575389823 +0000 UTC m=+2003.356158908" lastFinishedPulling="2025-12-01 00:39:46.657368805 +0000 UTC m=+2007.438137899" observedRunningTime="2025-12-01 00:39:48.062390914 +0000 UTC m=+2008.843159998" watchObservedRunningTime="2025-12-01 00:39:48.063209 +0000 UTC m=+2008.843978064" Dec 01 00:39:55 crc kubenswrapper[4846]: I1201 00:39:55.420261 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:39:55 crc kubenswrapper[4846]: I1201 00:39:55.420955 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.135197 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.137798 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.158314 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.229616 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmn7\" (UniqueName: \"kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.229673 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.229833 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.331487 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.331636 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmn7\" (UniqueName: \"kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.331670 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.332400 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.332467 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.364757 4846 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tcmn7\" (UniqueName: \"kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7\") pod \"certified-operators-jndbc\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.460989 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:17 crc kubenswrapper[4846]: I1201 00:40:17.926532 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:18 crc kubenswrapper[4846]: I1201 00:40:18.286888 4846 generic.go:334] "Generic (PLEG): container finished" podID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerID="986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa" exitCode=0 Dec 01 00:40:18 crc kubenswrapper[4846]: I1201 00:40:18.286956 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerDied","Data":"986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa"} Dec 01 00:40:18 crc kubenswrapper[4846]: I1201 00:40:18.287006 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerStarted","Data":"361cbc9d5e98bd328efb6c76984659dc38d147de192c6e2f7d67bdeed82b1d66"} Dec 01 00:40:19 crc kubenswrapper[4846]: I1201 00:40:19.296926 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerStarted","Data":"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3"} Dec 01 00:40:20 crc kubenswrapper[4846]: I1201 00:40:20.307239 4846 generic.go:334] "Generic (PLEG): container finished" podID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerID="97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3" exitCode=0 Dec 01 00:40:20 crc kubenswrapper[4846]: I1201 00:40:20.307352 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerDied","Data":"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3"} Dec 01 00:40:21 crc kubenswrapper[4846]: I1201 00:40:21.320630 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerStarted","Data":"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f"} Dec 01 00:40:21 crc kubenswrapper[4846]: I1201 00:40:21.340239 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jndbc" podStartSLOduration=1.836221343 podStartE2EDuration="4.340220824s" podCreationTimestamp="2025-12-01 00:40:17 +0000 UTC" firstStartedPulling="2025-12-01 00:40:18.288468231 +0000 UTC m=+2039.069237305" lastFinishedPulling="2025-12-01 00:40:20.792467712 +0000 UTC m=+2041.573236786" observedRunningTime="2025-12-01 00:40:21.338327774 +0000 UTC m=+2042.119096868" watchObservedRunningTime="2025-12-01 00:40:21.340220824 +0000 UTC m=+2042.120989908" Dec 01 00:40:24 crc kubenswrapper[4846]: I1201 00:40:24.874853 4846 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:24 crc kubenswrapper[4846]: I1201 00:40:24.875946 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:24 crc kubenswrapper[4846]: I1201 00:40:24.912404 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.049956 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrpp\" (UniqueName: \"kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp\") pod \"infrawatch-operators-575gw\" (UID: \"0d6afa75-9c08-4cc0-bd79-05ba5db083ff\") " pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.151139 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prrpp\" (UniqueName: \"kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp\") pod \"infrawatch-operators-575gw\" (UID: \"0d6afa75-9c08-4cc0-bd79-05ba5db083ff\") " pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.182173 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prrpp\" (UniqueName: \"kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp\") pod \"infrawatch-operators-575gw\" (UID: \"0d6afa75-9c08-4cc0-bd79-05ba5db083ff\") " pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.208662 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.419455 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.419951 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:40:25 crc kubenswrapper[4846]: I1201 00:40:25.477595 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:25 crc kubenswrapper[4846]: W1201 00:40:25.491573 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d6afa75_9c08_4cc0_bd79_05ba5db083ff.slice/crio-271102625b02d5eb0cece6ca8bf638eef4ebcd5da07eb9fff4b4f2ba0ed6748b WatchSource:0}: Error finding container 271102625b02d5eb0cece6ca8bf638eef4ebcd5da07eb9fff4b4f2ba0ed6748b: Status 404 returned error can't find the container with id 271102625b02d5eb0cece6ca8bf638eef4ebcd5da07eb9fff4b4f2ba0ed6748b Dec 01 00:40:26 crc kubenswrapper[4846]: I1201 00:40:26.361949 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-575gw" 
event={"ID":"0d6afa75-9c08-4cc0-bd79-05ba5db083ff","Type":"ContainerStarted","Data":"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac"} Dec 01 00:40:26 crc kubenswrapper[4846]: I1201 00:40:26.362246 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-575gw" event={"ID":"0d6afa75-9c08-4cc0-bd79-05ba5db083ff","Type":"ContainerStarted","Data":"271102625b02d5eb0cece6ca8bf638eef4ebcd5da07eb9fff4b4f2ba0ed6748b"} Dec 01 00:40:26 crc kubenswrapper[4846]: I1201 00:40:26.387041 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-575gw" podStartSLOduration=2.284261253 podStartE2EDuration="2.38701062s" podCreationTimestamp="2025-12-01 00:40:24 +0000 UTC" firstStartedPulling="2025-12-01 00:40:25.498315023 +0000 UTC m=+2046.279084097" lastFinishedPulling="2025-12-01 00:40:25.60106435 +0000 UTC m=+2046.381833464" observedRunningTime="2025-12-01 00:40:26.376159017 +0000 UTC m=+2047.156928121" watchObservedRunningTime="2025-12-01 00:40:26.38701062 +0000 UTC m=+2047.167779754" Dec 01 00:40:27 crc kubenswrapper[4846]: I1201 00:40:27.462056 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:27 crc kubenswrapper[4846]: I1201 00:40:27.462662 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:27 crc kubenswrapper[4846]: I1201 00:40:27.530083 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:28 crc kubenswrapper[4846]: I1201 00:40:28.453596 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.306190 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.405251 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jndbc" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="registry-server" containerID="cri-o://da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f" gracePeriod=2 Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.869423 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.953382 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities\") pod \"7afd0b02-2157-407b-be6c-154b6cb96fd4\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.953549 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcmn7\" (UniqueName: \"kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7\") pod \"7afd0b02-2157-407b-be6c-154b6cb96fd4\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.953599 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content\") pod \"7afd0b02-2157-407b-be6c-154b6cb96fd4\" (UID: \"7afd0b02-2157-407b-be6c-154b6cb96fd4\") " Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.954301 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities" (OuterVolumeSpecName: "utilities") pod "7afd0b02-2157-407b-be6c-154b6cb96fd4" (UID: "7afd0b02-2157-407b-be6c-154b6cb96fd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:40:31 crc kubenswrapper[4846]: I1201 00:40:31.959448 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7" (OuterVolumeSpecName: "kube-api-access-tcmn7") pod "7afd0b02-2157-407b-be6c-154b6cb96fd4" (UID: "7afd0b02-2157-407b-be6c-154b6cb96fd4"). InnerVolumeSpecName "kube-api-access-tcmn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.012013 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7afd0b02-2157-407b-be6c-154b6cb96fd4" (UID: "7afd0b02-2157-407b-be6c-154b6cb96fd4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.055737 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.055781 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcmn7\" (UniqueName: \"kubernetes.io/projected/7afd0b02-2157-407b-be6c-154b6cb96fd4-kube-api-access-tcmn7\") on node \"crc\" DevicePath \"\"" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.055796 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afd0b02-2157-407b-be6c-154b6cb96fd4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.440828 4846 generic.go:334] "Generic (PLEG): container finished" podID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerID="da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f" exitCode=0 Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.440898 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerDied","Data":"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f"} Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.440940 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndbc" event={"ID":"7afd0b02-2157-407b-be6c-154b6cb96fd4","Type":"ContainerDied","Data":"361cbc9d5e98bd328efb6c76984659dc38d147de192c6e2f7d67bdeed82b1d66"} Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.441008 4846 scope.go:117] "RemoveContainer" containerID="da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.441220 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndbc" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.468131 4846 scope.go:117] "RemoveContainer" containerID="97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.489756 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.499258 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jndbc"] Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.508065 4846 scope.go:117] "RemoveContainer" containerID="986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.531054 4846 scope.go:117] "RemoveContainer" containerID="da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f" Dec 01 00:40:32 crc kubenswrapper[4846]: E1201 00:40:32.531569 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f\": container with ID starting with da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f not found: ID does not exist" containerID="da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.531628 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f"} err="failed to get container status \"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f\": rpc error: code = NotFound desc = could not find container \"da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f\": container with ID starting with da838027244b298744aa607b00e5774d4cf584aa4ad96c079027a9ef6148925f not found: ID does not exist" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.531660 4846 scope.go:117] "RemoveContainer" containerID="97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3" Dec 01 00:40:32 crc kubenswrapper[4846]: E1201 00:40:32.532031 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3\": container with ID starting with 97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3 not found: ID does not exist" containerID="97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.532081 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3"} err="failed to get container status \"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3\": rpc error: code = NotFound desc = could not find container \"97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3\": container with ID starting with 97800663114fa2699021c4e7583b657c583b6ee70eaa36a7c71754e17fda9eb3 not found: ID does not exist" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.532118 4846 scope.go:117] "RemoveContainer" containerID="986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa" Dec 01 00:40:32 crc kubenswrapper[4846]: E1201 00:40:32.532577 4846 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa\": container with ID starting with 986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa not found: ID does not exist" containerID="986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa" Dec 01 00:40:32 crc kubenswrapper[4846]: I1201 00:40:32.532603 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa"} err="failed to get container status \"986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa\": rpc error: code = NotFound desc = could not find container \"986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa\": container with ID starting with 986a7a903a334703efc42c7ed1e353bcf88b87a0b1fbb0a98a4cd444218c93fa not found: ID does not exist" Dec 01 00:40:33 crc kubenswrapper[4846]: I1201 00:40:33.199955 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qjvv5_2a047a1e-0e7f-474d-8026-71f3cb40d657/control-plane-machine-set-operator/0.log" Dec 01 00:40:33 crc kubenswrapper[4846]: I1201 00:40:33.348961 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wlrts_6d8ea8a6-45fc-461a-8ce6-f317ff37eac9/kube-rbac-proxy/0.log" Dec 01 00:40:33 crc kubenswrapper[4846]: I1201 00:40:33.382445 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wlrts_6d8ea8a6-45fc-461a-8ce6-f317ff37eac9/machine-api-operator/0.log" Dec 01 00:40:33 crc kubenswrapper[4846]: I1201 00:40:33.589350 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" path="/var/lib/kubelet/pods/7afd0b02-2157-407b-be6c-154b6cb96fd4/volumes" Dec 01 00:40:35 crc kubenswrapper[4846]: I1201 00:40:35.210571 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:35 crc kubenswrapper[4846]: I1201 00:40:35.210642 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:35 crc kubenswrapper[4846]: I1201 00:40:35.253825 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:35 crc kubenswrapper[4846]: I1201 00:40:35.505942 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:37 crc kubenswrapper[4846]: I1201 00:40:37.300312 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:37 crc kubenswrapper[4846]: I1201 00:40:37.480389 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-575gw" podUID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" containerName="registry-server" containerID="cri-o://620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac" gracePeriod=2 Dec 01 00:40:37 crc kubenswrapper[4846]: I1201 00:40:37.949742 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.048705 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prrpp\" (UniqueName: \"kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp\") pod \"0d6afa75-9c08-4cc0-bd79-05ba5db083ff\" (UID: \"0d6afa75-9c08-4cc0-bd79-05ba5db083ff\") " Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.055023 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp" (OuterVolumeSpecName: "kube-api-access-prrpp") pod "0d6afa75-9c08-4cc0-bd79-05ba5db083ff" (UID: "0d6afa75-9c08-4cc0-bd79-05ba5db083ff"). InnerVolumeSpecName "kube-api-access-prrpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.150026 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prrpp\" (UniqueName: \"kubernetes.io/projected/0d6afa75-9c08-4cc0-bd79-05ba5db083ff-kube-api-access-prrpp\") on node \"crc\" DevicePath \"\"" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.495100 4846 generic.go:334] "Generic (PLEG): container finished" podID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" containerID="620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac" exitCode=0 Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.495156 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-575gw" event={"ID":"0d6afa75-9c08-4cc0-bd79-05ba5db083ff","Type":"ContainerDied","Data":"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac"} Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.495179 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-575gw" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.495194 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-575gw" event={"ID":"0d6afa75-9c08-4cc0-bd79-05ba5db083ff","Type":"ContainerDied","Data":"271102625b02d5eb0cece6ca8bf638eef4ebcd5da07eb9fff4b4f2ba0ed6748b"} Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.495224 4846 scope.go:117] "RemoveContainer" containerID="620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.526511 4846 scope.go:117] "RemoveContainer" containerID="620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac" Dec 01 00:40:38 crc kubenswrapper[4846]: E1201 00:40:38.527309 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac\": container with ID starting with 620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac not found: ID does not exist" containerID="620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.527374 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac"} err="failed to get container status \"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac\": rpc error: code = NotFound desc = could not find container \"620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac\": container with ID starting with 620b273f5d707b4b8b4d357b175e89e15e5a3c545a40641494b2845171e647ac not found: ID does not exist" Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.551226 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:38 crc kubenswrapper[4846]: I1201 00:40:38.562961 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-575gw"] Dec 01 00:40:39 crc kubenswrapper[4846]: I1201 00:40:39.595416 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" path="/var/lib/kubelet/pods/0d6afa75-9c08-4cc0-bd79-05ba5db083ff/volumes" Dec 01 00:40:46 crc kubenswrapper[4846]: I1201 00:40:46.967711 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-ztlvf_bcd8f297-7a23-41d8-b71f-eb10e7f7ead3/cert-manager-controller/0.log" Dec 01 00:40:47 crc kubenswrapper[4846]: I1201 00:40:47.097382 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-jkskj_6a8cb7d0-8968-4cfe-a4f6-89d488b15dc4/cert-manager-cainjector/0.log" Dec 01 00:40:47 crc kubenswrapper[4846]: I1201 00:40:47.144360 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-4plm6_0c7bbe0f-b67d-4752-9d38-76d426929bfa/cert-manager-webhook/0.log" Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.420397 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.420996 4846 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.421058 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.421869 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.421938 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25" gracePeriod=600 Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.651512 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25" exitCode=0 Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.651606 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25"} Dec 01 00:40:55 crc kubenswrapper[4846]: I1201 00:40:55.653195 4846 scope.go:117] "RemoveContainer" containerID="de23a55f09b246fdc09261ef62046d9d2a7f0aa1f1b692f805d44e9a5a9fe6e6" Dec 01 00:40:56 crc kubenswrapper[4846]: I1201 00:40:56.663660 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerStarted","Data":"57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e"} Dec 01 00:41:04 crc kubenswrapper[4846]: I1201 00:41:04.526416 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/util/0.log" Dec 01 00:41:04 crc kubenswrapper[4846]: I1201 00:41:04.722869 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/pull/0.log" Dec 01 00:41:04 crc kubenswrapper[4846]: I1201 00:41:04.736461 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/util/0.log" Dec 01 00:41:04 crc kubenswrapper[4846]: I1201 00:41:04.736906 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/pull/0.log" Dec 01 00:41:05 crc 
kubenswrapper[4846]: I1201 00:41:05.002518 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.040050 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/extract/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.049147 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azqtg7_018280e5-7f09-4ed8-81b4-0c26013fa732/pull/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.181646 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.339094 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/pull/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.339429 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.440401 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/pull/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.557902 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/extract/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.560319 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.566259 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lqzg_c5b5d881-fbb7-405c-92d5-f00c25d0c405/pull/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.751960 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.908539 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/util/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.951841 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/pull/0.log" Dec 01 00:41:05 crc kubenswrapper[4846]: I1201 00:41:05.961498 4846 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/pull/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.113106 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/extract/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.172834 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/pull/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.191978 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f6b9bn_d68370f6-b067-4f78-b8c2-6ed2892b65ae/util/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.305660 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/util/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.514403 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/pull/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.518581 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/util/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.560454 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/pull/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.767938 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/util/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.805938 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/pull/0.log" Dec 01 00:41:06 crc kubenswrapper[4846]: I1201 00:41:06.842701 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e7s6kn_60809e12-6548-49c7-9873-a97d3603e686/extract/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.023898 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-utilities/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.196060 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-utilities/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.202563 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-content/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.241362 
4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-content/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.520883 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-content/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.542992 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/extract-utilities/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.731243 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-utilities/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.892418 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9l658_a007a56c-cd63-4539-9315-a48129a2f363/registry-server/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.906333 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-utilities/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.933088 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-content/0.log" Dec 01 00:41:07 crc kubenswrapper[4846]: I1201 00:41:07.959201 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-content/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.112840 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-content/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.137845 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/extract-utilities/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.177119 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-pv8kk_ffcd812b-4d10-421c-a280-abb861c27dac/marketplace-operator/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.370750 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-utilities/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.432544 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dcz9z_d6de706a-4254-4a83-a9dd-b5ce1da2b907/registry-server/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.500475 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-content/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.504388 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-utilities/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.531901 4846 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-content/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.697576 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-utilities/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.712278 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/extract-content/0.log" Dec 01 00:41:08 crc kubenswrapper[4846]: I1201 00:41:08.953316 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-565r9_97155d55-621b-4102-9ec2-1257be3341d7/registry-server/0.log" Dec 01 00:41:22 crc kubenswrapper[4846]: I1201 00:41:22.793872 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-vq7fk_8fe16b50-7f2c-4aac-80a8-ba42ea8c21c9/prometheus-operator/0.log" Dec 01 00:41:22 crc kubenswrapper[4846]: I1201 00:41:22.929850 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d77d85664-nwg9s_8e07cf8a-8c81-462f-9e58-b29e9260dc71/prometheus-operator-admission-webhook/0.log" Dec 01 00:41:23 crc kubenswrapper[4846]: I1201 00:41:23.001494 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d77d85664-rqzsq_1b7c7b93-7478-44f8-ab2f-5891c64d630a/prometheus-operator-admission-webhook/0.log" Dec 01 00:41:23 crc kubenswrapper[4846]: I1201 00:41:23.115735 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-d2dpt_4d309dd9-6adf-466f-a167-b1c57b2089d4/operator/0.log" Dec 01 00:41:23 crc kubenswrapper[4846]: I1201 00:41:23.225898 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-dv92l_0c431140-8c85-47ad-b896-921fad1ac609/perses-operator/0.log" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.909771 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:29 crc kubenswrapper[4846]: E1201 00:41:29.910508 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="extract-utilities" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910521 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="extract-utilities" Dec 01 00:41:29 crc kubenswrapper[4846]: E1201 00:41:29.910533 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="registry-server" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910539 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="registry-server" Dec 01 00:41:29 crc kubenswrapper[4846]: E1201 00:41:29.910550 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" containerName="registry-server" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910557 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" containerName="registry-server" Dec 01 00:41:29 crc 
kubenswrapper[4846]: E1201 00:41:29.910571 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="extract-content" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910577 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="extract-content" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910709 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afd0b02-2157-407b-be6c-154b6cb96fd4" containerName="registry-server" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.910723 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6afa75-9c08-4cc0-bd79-05ba5db083ff" containerName="registry-server" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.911601 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.927333 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.927602 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms95d\" (UniqueName: \"kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.927707 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:29 crc kubenswrapper[4846]: I1201 00:41:29.932846 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.028841 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms95d\" (UniqueName: \"kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.028997 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.029110 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 
00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.029662 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.029725 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.053919 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms95d\" (UniqueName: \"kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d\") pod \"redhat-operators-pqb68\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.246570 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.499773 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:30 crc kubenswrapper[4846]: W1201 00:41:30.509828 4846 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7659680_e47d_43d7_b9f6_b303ebaf38d2.slice/crio-80f9065fc40282d53d3bb9b66bb5d988c235a8554d49e9799b4ed33493912ee6 WatchSource:0}: Error finding container 80f9065fc40282d53d3bb9b66bb5d988c235a8554d49e9799b4ed33493912ee6: Status 404 returned error can't find the container with id 80f9065fc40282d53d3bb9b66bb5d988c235a8554d49e9799b4ed33493912ee6 Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.941476 4846 generic.go:334] "Generic (PLEG): container finished" podID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerID="101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3" exitCode=0 Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.941532 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerDied","Data":"101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3"} Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.941724 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerStarted","Data":"80f9065fc40282d53d3bb9b66bb5d988c235a8554d49e9799b4ed33493912ee6"} Dec 01 00:41:30 crc kubenswrapper[4846]: I1201 00:41:30.943162 4846 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 01 00:41:31 crc kubenswrapper[4846]: I1201 00:41:31.951213 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerStarted","Data":"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b"} Dec 01 00:41:32 crc kubenswrapper[4846]: I1201 00:41:32.962089 4846 generic.go:334] "Generic (PLEG): container finished" 
podID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerID="019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b" exitCode=0 Dec 01 00:41:32 crc kubenswrapper[4846]: I1201 00:41:32.962472 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerDied","Data":"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b"} Dec 01 00:41:34 crc kubenswrapper[4846]: I1201 00:41:34.990609 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerStarted","Data":"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a"} Dec 01 00:41:35 crc kubenswrapper[4846]: I1201 00:41:35.018613 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pqb68" podStartSLOduration=3.103546381 podStartE2EDuration="6.018594224s" podCreationTimestamp="2025-12-01 00:41:29 +0000 UTC" firstStartedPulling="2025-12-01 00:41:30.942960811 +0000 UTC m=+2111.723729885" lastFinishedPulling="2025-12-01 00:41:33.858008624 +0000 UTC m=+2114.638777728" observedRunningTime="2025-12-01 00:41:35.009470017 +0000 UTC m=+2115.790239091" watchObservedRunningTime="2025-12-01 00:41:35.018594224 +0000 UTC m=+2115.799363308" Dec 01 00:41:40 crc kubenswrapper[4846]: I1201 00:41:40.247430 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:40 crc kubenswrapper[4846]: I1201 00:41:40.249366 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:41 crc kubenswrapper[4846]: I1201 00:41:41.327165 4846 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqb68" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="registry-server" probeResult="failure" output=< Dec 01 00:41:41 crc kubenswrapper[4846]: timeout: failed to connect service ":50051" within 1s Dec 01 00:41:41 crc kubenswrapper[4846]: > Dec 01 00:41:50 crc kubenswrapper[4846]: I1201 00:41:50.310634 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:50 crc kubenswrapper[4846]: I1201 00:41:50.391052 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:50 crc kubenswrapper[4846]: I1201 00:41:50.561482 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:52 crc kubenswrapper[4846]: I1201 00:41:52.166634 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pqb68" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="registry-server" containerID="cri-o://5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a" gracePeriod=2 Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.132389 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.176655 4846 generic.go:334] "Generic (PLEG): container finished" podID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerID="5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a" exitCode=0 Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.176714 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerDied","Data":"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a"} Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.176740 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqb68" event={"ID":"d7659680-e47d-43d7-b9f6-b303ebaf38d2","Type":"ContainerDied","Data":"80f9065fc40282d53d3bb9b66bb5d988c235a8554d49e9799b4ed33493912ee6"} Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.176757 4846 scope.go:117] "RemoveContainer" containerID="5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.176866 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqb68" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.181654 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content\") pod \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.181796 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities\") pod \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.181842 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms95d\" (UniqueName: \"kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d\") pod \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\" (UID: \"d7659680-e47d-43d7-b9f6-b303ebaf38d2\") " Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.183840 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities" (OuterVolumeSpecName: "utilities") pod "d7659680-e47d-43d7-b9f6-b303ebaf38d2" (UID: "d7659680-e47d-43d7-b9f6-b303ebaf38d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.189558 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d" (OuterVolumeSpecName: "kube-api-access-ms95d") pod "d7659680-e47d-43d7-b9f6-b303ebaf38d2" (UID: "d7659680-e47d-43d7-b9f6-b303ebaf38d2"). InnerVolumeSpecName "kube-api-access-ms95d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.206109 4846 scope.go:117] "RemoveContainer" containerID="019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.233535 4846 scope.go:117] "RemoveContainer" containerID="101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.257019 4846 scope.go:117] "RemoveContainer" containerID="5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a" Dec 01 00:41:53 crc kubenswrapper[4846]: E1201 00:41:53.257535 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a\": container with ID starting with 5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a not found: ID does not exist" containerID="5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.257599 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a"} err="failed to get container status \"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a\": rpc error: code = NotFound desc = could not find container \"5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a\": container with ID starting with 5c6a4ccf970c89a5c568710071f420e90ca9a37d382d44e640cab7f971bacf4a not found: ID does not exist" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.257644 4846 scope.go:117] "RemoveContainer" containerID="019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b" Dec 01 00:41:53 crc kubenswrapper[4846]: E1201 00:41:53.258096 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b\": container with ID starting with 019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b not found: ID does not exist" containerID="019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.258139 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b"} err="failed to get container status \"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b\": rpc error: code = NotFound desc = could not find container \"019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b\": container with ID starting with 019c3b0673c28afde5a525759eb414ac7bda48440ae124822e93dfb6bcd5ff0b not found: ID does not exist" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.258167 4846 scope.go:117] "RemoveContainer" containerID="101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3" Dec 01 00:41:53 crc kubenswrapper[4846]: E1201 00:41:53.258638 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3\": container with ID starting with 101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3 not found: ID does not exist" containerID="101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3" Dec 01 00:41:53 crc 
kubenswrapper[4846]: I1201 00:41:53.258699 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3"} err="failed to get container status \"101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3\": rpc error: code = NotFound desc = could not find container \"101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3\": container with ID starting with 101b5b6ac5064b9c6b03bbb85f9ddc3d0a027f646ec121cac46ea57c4443d6f3 not found: ID does not exist" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.284592 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.284645 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms95d\" (UniqueName: \"kubernetes.io/projected/d7659680-e47d-43d7-b9f6-b303ebaf38d2-kube-api-access-ms95d\") on node \"crc\" DevicePath \"\"" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.300388 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7659680-e47d-43d7-b9f6-b303ebaf38d2" (UID: "d7659680-e47d-43d7-b9f6-b303ebaf38d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.385636 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7659680-e47d-43d7-b9f6-b303ebaf38d2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.512724 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.518619 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pqb68"] Dec 01 00:41:53 crc kubenswrapper[4846]: I1201 00:41:53.596235 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" path="/var/lib/kubelet/pods/d7659680-e47d-43d7-b9f6-b303ebaf38d2/volumes" Dec 01 00:42:08 crc kubenswrapper[4846]: I1201 00:42:08.352114 4846 generic.go:334] "Generic (PLEG): container finished" podID="403386ea-31a7-40cf-a922-f569d0109eb2" containerID="3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86" exitCode=0 Dec 01 00:42:08 crc kubenswrapper[4846]: I1201 00:42:08.352218 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b8f7r/must-gather-db8sq" event={"ID":"403386ea-31a7-40cf-a922-f569d0109eb2","Type":"ContainerDied","Data":"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86"} Dec 01 00:42:08 crc kubenswrapper[4846]: I1201 00:42:08.353822 4846 scope.go:117] "RemoveContainer" containerID="3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86" Dec 01 00:42:09 crc kubenswrapper[4846]: I1201 00:42:09.008838 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8f7r_must-gather-db8sq_403386ea-31a7-40cf-a922-f569d0109eb2/gather/0.log" Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.432013 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-b8f7r/must-gather-db8sq"] Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.432661 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-b8f7r/must-gather-db8sq" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="copy" containerID="cri-o://cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50" gracePeriod=2 Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.439014 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b8f7r/must-gather-db8sq"] Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.917020 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8f7r_must-gather-db8sq_403386ea-31a7-40cf-a922-f569d0109eb2/copy/0.log" Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.917590 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.950772 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output\") pod \"403386ea-31a7-40cf-a922-f569d0109eb2\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.961362 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx6d7\" (UniqueName: \"kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7\") pod \"403386ea-31a7-40cf-a922-f569d0109eb2\" (UID: \"403386ea-31a7-40cf-a922-f569d0109eb2\") " Dec 01 00:42:15 crc kubenswrapper[4846]: I1201 00:42:15.967137 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7" (OuterVolumeSpecName: "kube-api-access-vx6d7") pod "403386ea-31a7-40cf-a922-f569d0109eb2" (UID: "403386ea-31a7-40cf-a922-f569d0109eb2"). InnerVolumeSpecName "kube-api-access-vx6d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.011588 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "403386ea-31a7-40cf-a922-f569d0109eb2" (UID: "403386ea-31a7-40cf-a922-f569d0109eb2"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.063363 4846 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/403386ea-31a7-40cf-a922-f569d0109eb2-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.063831 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx6d7\" (UniqueName: \"kubernetes.io/projected/403386ea-31a7-40cf-a922-f569d0109eb2-kube-api-access-vx6d7\") on node \"crc\" DevicePath \"\"" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.422588 4846 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b8f7r_must-gather-db8sq_403386ea-31a7-40cf-a922-f569d0109eb2/copy/0.log" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.423482 4846 generic.go:334] "Generic (PLEG): container finished" podID="403386ea-31a7-40cf-a922-f569d0109eb2" containerID="cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50" exitCode=143 Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.423528 4846 scope.go:117] "RemoveContainer" containerID="cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.423538 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b8f7r/must-gather-db8sq" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.447974 4846 scope.go:117] "RemoveContainer" containerID="3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.499092 4846 scope.go:117] "RemoveContainer" containerID="cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50" Dec 01 00:42:16 crc kubenswrapper[4846]: E1201 00:42:16.499653 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50\": container with ID starting with cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50 not found: ID does not exist" containerID="cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.499710 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50"} err="failed to get container status \"cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50\": rpc error: code = NotFound desc = could not find container \"cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50\": container with ID starting with cab623278f2e92d4480cc218420a85dc1989655be01f532bc333088fdb8e7e50 not found: ID does not exist" Dec 01 00:42:16 crc kubenswrapper[4846]: I1201 00:42:16.499739 4846 scope.go:117] "RemoveContainer" containerID="3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86" Dec 01 00:42:16 crc kubenswrapper[4846]: E1201 00:42:16.500238 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86\": container with ID starting with 3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86 not found: ID does not exist" containerID="3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86" Dec 01 00:42:16 crc 
kubenswrapper[4846]: I1201 00:42:16.500302 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86"} err="failed to get container status \"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86\": rpc error: code = NotFound desc = could not find container \"3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86\": container with ID starting with 3464799c2b37751645784d41e141d31a657d70b88e76bedb985b1c7163cd3f86 not found: ID does not exist" Dec 01 00:42:17 crc kubenswrapper[4846]: I1201 00:42:17.596117 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" path="/var/lib/kubelet/pods/403386ea-31a7-40cf-a922-f569d0109eb2/volumes" Dec 01 00:42:55 crc kubenswrapper[4846]: I1201 00:42:55.420544 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:42:55 crc kubenswrapper[4846]: I1201 00:42:55.421130 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:43:25 crc kubenswrapper[4846]: I1201 00:43:25.420454 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:43:25 crc kubenswrapper[4846]: I1201 00:43:25.421168 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.288935 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:28 crc kubenswrapper[4846]: E1201 00:43:28.290224 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="extract-utilities" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290243 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="extract-utilities" Dec 01 00:43:28 crc kubenswrapper[4846]: E1201 00:43:28.290268 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="registry-server" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290276 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="registry-server" Dec 01 00:43:28 crc kubenswrapper[4846]: E1201 00:43:28.290295 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="gather" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290303 4846 
state_mem.go:107] "Deleted CPUSet assignment" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="gather" Dec 01 00:43:28 crc kubenswrapper[4846]: E1201 00:43:28.290316 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="extract-content" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290323 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="extract-content" Dec 01 00:43:28 crc kubenswrapper[4846]: E1201 00:43:28.290338 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="copy" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290345 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="copy" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290491 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="copy" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290513 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="403386ea-31a7-40cf-a922-f569d0109eb2" containerName="gather" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.290533 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7659680-e47d-43d7-b9f6-b303ebaf38d2" containerName="registry-server" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.291783 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.303715 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.365794 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt58m\" (UniqueName: \"kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.365852 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.366009 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.467508 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.467645 4846 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt58m\" (UniqueName: \"kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.467710 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.468171 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.468231 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.487999 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt58m\" (UniqueName: \"kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m\") pod \"community-operators-zhrk8\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.620988 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:28 crc kubenswrapper[4846]: I1201 00:43:28.866328 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:29 crc kubenswrapper[4846]: I1201 00:43:29.098840 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerStarted","Data":"92b433a17175c3a770f63a40bbd848e83b175db37f8654efcf77493296b70059"} Dec 01 00:43:30 crc kubenswrapper[4846]: I1201 00:43:30.112338 4846 generic.go:334] "Generic (PLEG): container finished" podID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerID="d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe" exitCode=0 Dec 01 00:43:30 crc kubenswrapper[4846]: I1201 00:43:30.112446 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerDied","Data":"d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe"} Dec 01 00:43:31 crc kubenswrapper[4846]: I1201 00:43:31.125443 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerStarted","Data":"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6"} Dec 01 00:43:32 crc kubenswrapper[4846]: I1201 00:43:32.140655 4846 generic.go:334] "Generic (PLEG): container finished" podID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerID="d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6" exitCode=0 Dec 01 00:43:32 crc kubenswrapper[4846]: I1201 00:43:32.140768 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerDied","Data":"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6"} Dec 01 00:43:33 crc kubenswrapper[4846]: I1201 00:43:33.151300 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerStarted","Data":"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35"} Dec 01 00:43:33 crc kubenswrapper[4846]: I1201 00:43:33.172642 4846 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhrk8" podStartSLOduration=2.6684838060000002 podStartE2EDuration="5.172621482s" podCreationTimestamp="2025-12-01 00:43:28 +0000 UTC" firstStartedPulling="2025-12-01 00:43:30.1174383 +0000 UTC m=+2230.898207384" lastFinishedPulling="2025-12-01 00:43:32.621575986 +0000 UTC m=+2233.402345060" observedRunningTime="2025-12-01 00:43:33.169267495 +0000 UTC m=+2233.950036609" watchObservedRunningTime="2025-12-01 00:43:33.172621482 +0000 UTC m=+2233.953390566" Dec 01 00:43:38 crc kubenswrapper[4846]: I1201 00:43:38.621287 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:38 crc kubenswrapper[4846]: I1201 00:43:38.622066 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:38 crc kubenswrapper[4846]: I1201 00:43:38.701007 4846 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:39 crc kubenswrapper[4846]: I1201 00:43:39.296187 4846 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:39 crc kubenswrapper[4846]: I1201 00:43:39.359908 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:41 crc kubenswrapper[4846]: I1201 00:43:41.239855 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhrk8" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="registry-server" containerID="cri-o://6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35" gracePeriod=2 Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.220468 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.252223 4846 generic.go:334] "Generic (PLEG): container finished" podID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerID="6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35" exitCode=0 Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.252273 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerDied","Data":"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35"} Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.252306 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhrk8" event={"ID":"429228e4-7e3d-4061-9045-0e37f9b7bc83","Type":"ContainerDied","Data":"92b433a17175c3a770f63a40bbd848e83b175db37f8654efcf77493296b70059"} Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.252329 4846 scope.go:117] "RemoveContainer" containerID="6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.252364 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhrk8" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.269935 4846 scope.go:117] "RemoveContainer" containerID="d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.298816 4846 scope.go:117] "RemoveContainer" containerID="d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.321532 4846 scope.go:117] "RemoveContainer" containerID="6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35" Dec 01 00:43:42 crc kubenswrapper[4846]: E1201 00:43:42.321970 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35\": container with ID starting with 6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35 not found: ID does not exist" containerID="6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.322011 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35"} err="failed to get container status \"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35\": rpc error: code = NotFound desc = could not find container \"6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35\": container with ID starting with 6ee61532df2e1ca88ecbad9afe894d8ba5da21d5c923ae7564afd688f3dd6f35 not found: ID does not exist" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.322042 4846 scope.go:117] "RemoveContainer" containerID="d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6" Dec 01 00:43:42 crc kubenswrapper[4846]: E1201 00:43:42.322359 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6\": container with ID starting with d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6 not found: ID does not exist" containerID="d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.322428 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6"} err="failed to get container status \"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6\": rpc error: code = NotFound desc = could not find container \"d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6\": container with ID starting with d1ec4dc5696b44557b4c29445becb72ea73d8f7dc25065b6ab655dcc530ac3c6 not found: ID does not exist" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.322469 4846 scope.go:117] "RemoveContainer" containerID="d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe" Dec 01 00:43:42 crc kubenswrapper[4846]: E1201 00:43:42.323262 4846 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe\": container with ID starting with d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe not found: ID does not exist" containerID="d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe" 
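[Annotation, not part of the journal output] The pod_startup_latency_tracker entry for community-operators-zhrk8 a few lines above reports podStartSLOduration about 2.668s against a podStartE2EDuration of about 5.173s, with firstStartedPulling and lastFinishedPulling given as m=+ monotonic offsets. Reading the SLO duration as "end-to-end startup time minus the image-pull window" is an interpretation of that single entry, not something stated elsewhere in this log, but the logged values do satisfy it. A minimal sketch of the arithmetic, using only numbers copied from that entry:

    # Values copied from the podStartSLOduration log entry above (seconds).
    pod_start_e2e = 5.172621482             # podStartE2EDuration
    first_started_pulling = 2230.898207384  # firstStartedPulling, m=+ offset
    last_finished_pulling = 2233.402345060  # lastFinishedPulling, m=+ offset

    # Interpretation (assumption): SLO duration excludes the image-pull window.
    image_pull_window = last_finished_pulling - first_started_pulling
    slo_duration = pod_start_e2e - image_pull_window

    print(f"image pull window:   {image_pull_window:.9f}s")  # ~2.504137676s
    print(f"podStartSLOduration: {slo_duration:.9f}s")       # ~2.668483806s, matching the log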
Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.323333 4846 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe"} err="failed to get container status \"d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe\": rpc error: code = NotFound desc = could not find container \"d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe\": container with ID starting with d2a196d186f72594007074df5e1446dc34ec379ea5247c5838cc6e3a99888bbe not found: ID does not exist" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.418323 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities\") pod \"429228e4-7e3d-4061-9045-0e37f9b7bc83\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.418458 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content\") pod \"429228e4-7e3d-4061-9045-0e37f9b7bc83\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.418523 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt58m\" (UniqueName: \"kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m\") pod \"429228e4-7e3d-4061-9045-0e37f9b7bc83\" (UID: \"429228e4-7e3d-4061-9045-0e37f9b7bc83\") " Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.420187 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities" (OuterVolumeSpecName: "utilities") pod "429228e4-7e3d-4061-9045-0e37f9b7bc83" (UID: "429228e4-7e3d-4061-9045-0e37f9b7bc83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.428542 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m" (OuterVolumeSpecName: "kube-api-access-lt58m") pod "429228e4-7e3d-4061-9045-0e37f9b7bc83" (UID: "429228e4-7e3d-4061-9045-0e37f9b7bc83"). InnerVolumeSpecName "kube-api-access-lt58m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.509544 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "429228e4-7e3d-4061-9045-0e37f9b7bc83" (UID: "429228e4-7e3d-4061-9045-0e37f9b7bc83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.520434 4846 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-utilities\") on node \"crc\" DevicePath \"\"" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.520473 4846 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/429228e4-7e3d-4061-9045-0e37f9b7bc83-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.520487 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt58m\" (UniqueName: \"kubernetes.io/projected/429228e4-7e3d-4061-9045-0e37f9b7bc83-kube-api-access-lt58m\") on node \"crc\" DevicePath \"\"" Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.595347 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:42 crc kubenswrapper[4846]: I1201 00:43:42.606439 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhrk8"] Dec 01 00:43:43 crc kubenswrapper[4846]: I1201 00:43:43.591891 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" path="/var/lib/kubelet/pods/429228e4-7e3d-4061-9045-0e37f9b7bc83/volumes" Dec 01 00:43:55 crc kubenswrapper[4846]: I1201 00:43:55.420146 4846 patch_prober.go:28] interesting pod/machine-config-daemon-grqqg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 01 00:43:55 crc kubenswrapper[4846]: I1201 00:43:55.420985 4846 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 01 00:43:55 crc kubenswrapper[4846]: I1201 00:43:55.421049 4846 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" Dec 01 00:43:55 crc kubenswrapper[4846]: I1201 00:43:55.421873 4846 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e"} pod="openshift-machine-config-operator/machine-config-daemon-grqqg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 01 00:43:55 crc kubenswrapper[4846]: I1201 00:43:55.422027 4846 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerName="machine-config-daemon" containerID="cri-o://57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" gracePeriod=600 Dec 01 00:43:55 crc kubenswrapper[4846]: E1201 00:43:55.579651 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:43:56 crc kubenswrapper[4846]: I1201 00:43:56.434587 4846 generic.go:334] "Generic (PLEG): container finished" podID="d981647e-2c46-4ad1-afd7-757ef36643f8" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" exitCode=0 Dec 01 00:43:56 crc kubenswrapper[4846]: I1201 00:43:56.434724 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" event={"ID":"d981647e-2c46-4ad1-afd7-757ef36643f8","Type":"ContainerDied","Data":"57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e"} Dec 01 00:43:56 crc kubenswrapper[4846]: I1201 00:43:56.435089 4846 scope.go:117] "RemoveContainer" containerID="981733e1be260f0441574d3fe79b38aacc624e168f3bcd248d3314e930604c25" Dec 01 00:43:56 crc kubenswrapper[4846]: I1201 00:43:56.436138 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:43:56 crc kubenswrapper[4846]: E1201 00:43:56.436564 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:44:10 crc kubenswrapper[4846]: I1201 00:44:10.581285 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:44:10 crc kubenswrapper[4846]: E1201 00:44:10.582531 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:44:21 crc kubenswrapper[4846]: I1201 00:44:21.580548 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:44:21 crc kubenswrapper[4846]: E1201 00:44:21.581292 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:44:33 crc kubenswrapper[4846]: I1201 00:44:33.581081 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:44:33 crc kubenswrapper[4846]: E1201 00:44:33.581952 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:44:47 crc kubenswrapper[4846]: I1201 00:44:47.580671 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:44:47 crc kubenswrapper[4846]: E1201 00:44:47.581562 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.167526 4846 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6"] Dec 01 00:45:00 crc kubenswrapper[4846]: E1201 00:45:00.168901 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="registry-server" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.168923 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="registry-server" Dec 01 00:45:00 crc kubenswrapper[4846]: E1201 00:45:00.168953 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="extract-content" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.168965 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="extract-content" Dec 01 00:45:00 crc kubenswrapper[4846]: E1201 00:45:00.168996 4846 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="extract-utilities" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.169007 4846 state_mem.go:107] "Deleted CPUSet assignment" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="extract-utilities" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.169417 4846 memory_manager.go:354] "RemoveStaleState removing state" podUID="429228e4-7e3d-4061-9045-0e37f9b7bc83" containerName="registry-server" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.170676 4846 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.177222 4846 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.177356 4846 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.188442 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6"] Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.329162 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.329240 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.329326 4846 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckjf\" (UniqueName: \"kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.431676 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lckjf\" (UniqueName: \"kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.431992 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.432030 4846 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.434189 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume\") pod 
\"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.440477 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.449613 4846 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lckjf\" (UniqueName: \"kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf\") pod \"collect-profiles-29409165-s8qw6\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.502062 4846 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.580808 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:45:00 crc kubenswrapper[4846]: E1201 00:45:00.582248 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:45:00 crc kubenswrapper[4846]: I1201 00:45:00.992927 4846 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6"] Dec 01 00:45:01 crc kubenswrapper[4846]: I1201 00:45:01.038805 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" event={"ID":"859467ad-beb1-434d-a317-55f754babec0","Type":"ContainerStarted","Data":"60aafd715a6b7f3879779a5e1e16ef074fbb5572cc00bd1e00ce165f93e2a5e6"} Dec 01 00:45:02 crc kubenswrapper[4846]: I1201 00:45:02.050934 4846 generic.go:334] "Generic (PLEG): container finished" podID="859467ad-beb1-434d-a317-55f754babec0" containerID="0693709f4c95bc85ee8b2fe99047f388095c6153f3c1d3429be9d29c5f3f13bc" exitCode=0 Dec 01 00:45:02 crc kubenswrapper[4846]: I1201 00:45:02.051000 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" event={"ID":"859467ad-beb1-434d-a317-55f754babec0","Type":"ContainerDied","Data":"0693709f4c95bc85ee8b2fe99047f388095c6153f3c1d3429be9d29c5f3f13bc"} Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.365044 4846 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.482306 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume\") pod \"859467ad-beb1-434d-a317-55f754babec0\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.482502 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume\") pod \"859467ad-beb1-434d-a317-55f754babec0\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.482580 4846 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lckjf\" (UniqueName: \"kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf\") pod \"859467ad-beb1-434d-a317-55f754babec0\" (UID: \"859467ad-beb1-434d-a317-55f754babec0\") " Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.483533 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume" (OuterVolumeSpecName: "config-volume") pod "859467ad-beb1-434d-a317-55f754babec0" (UID: "859467ad-beb1-434d-a317-55f754babec0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.490286 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf" (OuterVolumeSpecName: "kube-api-access-lckjf") pod "859467ad-beb1-434d-a317-55f754babec0" (UID: "859467ad-beb1-434d-a317-55f754babec0"). InnerVolumeSpecName "kube-api-access-lckjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.491038 4846 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "859467ad-beb1-434d-a317-55f754babec0" (UID: "859467ad-beb1-434d-a317-55f754babec0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.585084 4846 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/859467ad-beb1-434d-a317-55f754babec0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.585168 4846 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lckjf\" (UniqueName: \"kubernetes.io/projected/859467ad-beb1-434d-a317-55f754babec0-kube-api-access-lckjf\") on node \"crc\" DevicePath \"\"" Dec 01 00:45:03 crc kubenswrapper[4846]: I1201 00:45:03.585198 4846 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859467ad-beb1-434d-a317-55f754babec0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 01 00:45:04 crc kubenswrapper[4846]: I1201 00:45:04.071936 4846 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" event={"ID":"859467ad-beb1-434d-a317-55f754babec0","Type":"ContainerDied","Data":"60aafd715a6b7f3879779a5e1e16ef074fbb5572cc00bd1e00ce165f93e2a5e6"} Dec 01 00:45:04 crc kubenswrapper[4846]: I1201 00:45:04.071994 4846 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aafd715a6b7f3879779a5e1e16ef074fbb5572cc00bd1e00ce165f93e2a5e6" Dec 01 00:45:04 crc kubenswrapper[4846]: I1201 00:45:04.072012 4846 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29409165-s8qw6" Dec 01 00:45:04 crc kubenswrapper[4846]: I1201 00:45:04.453279 4846 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps"] Dec 01 00:45:04 crc kubenswrapper[4846]: I1201 00:45:04.463150 4846 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29409120-5nwps"] Dec 01 00:45:05 crc kubenswrapper[4846]: I1201 00:45:05.597363 4846 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31064592-6043-412a-82e6-4eb313fa16a3" path="/var/lib/kubelet/pods/31064592-6043-412a-82e6-4eb313fa16a3/volumes" Dec 01 00:45:15 crc kubenswrapper[4846]: I1201 00:45:15.580764 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:45:15 crc kubenswrapper[4846]: E1201 00:45:15.581514 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:45:29 crc kubenswrapper[4846]: I1201 00:45:29.585484 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:45:29 crc kubenswrapper[4846]: E1201 00:45:29.586304 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8" Dec 01 00:45:41 crc kubenswrapper[4846]: I1201 00:45:41.091540 4846 scope.go:117] "RemoveContainer" containerID="20b1175a3be0d6537b5c240b4f24f7956698bfafa9bf47a7c785e2b8368d96a7" Dec 01 00:45:42 crc kubenswrapper[4846]: I1201 00:45:42.603573 4846 scope.go:117] "RemoveContainer" containerID="57a742e6923a89c32709829e1d776bacc5f1fd905900b1185833c876515ae66e" Dec 01 00:45:42 crc kubenswrapper[4846]: E1201 00:45:42.604110 4846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-grqqg_openshift-machine-config-operator(d981647e-2c46-4ad1-afd7-757ef36643f8)\"" pod="openshift-machine-config-operator/machine-config-daemon-grqqg" podUID="d981647e-2c46-4ad1-afd7-757ef36643f8"